In this episode of McKinsey on Building Products, a podcast dedicated to the exploration of software product management and engineering, McKinsey partner Rikki Singh spoke to Dr. Ya Xu, vice president of engineering and head of data and AI at LinkedIn. They discuss strategies engineering leaders can adopt to prioritize AI use cases, advance the development of AI technologies, and weave AI initiatives into the fabric of the company’s culture. This interview took place in August 2023 as part of the McKinsey Engineering Academy speaker series. An abridged version of their conversation follows.
A career in AI
Rikki Singh: Tell us about your career journey so far, and what your role as the head of data and AI at LinkedIn entails.
Dr. Ya Xu: I always joke that I’ve worked for only one company: Microsoft. I went from Microsoft to LinkedIn, but several years later, Microsoft acquired LinkedIn, so that’s my whole résumé. Now I lead the central global team that’s responsible for all of LinkedIn’s data and AI innovations.
Behind LinkedIn’s feed are feed-ranking and feed-relevance systems, which try to put the most relevant information at the top of the feed. That work has AI and ML [machine learning] behind it and is done by my team. In fact, a lot of the products that you interact with on LinkedIn have a heavy AI component behind them, such as the job search function and the “people you may know” recommendation.
My team also uses data to help LinkedIn make better high-level, strategic decisions. For instance, we can use data to decide whether we should invest in one product over another, or whether there are more opportunities within our current product.
Rikki Singh: What would your advice be for engineering leaders who are starting their own AI/ML journeys?
Dr. Ya Xu: If you have ever been interested in this field, now is the time to enter it. From a technology standpoint, today, relative to ten years ago, there are much better open-source platforms and technologies that you can use to get started. You don’t need to have a PhD in AI to get into the field. Frameworks such as PyTorch or TensorFlow allow people to get a head start with limited domain expertise in AI. As the field matures, the AI development process will come to resemble a standard software development process.
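As a quick illustration of how low that barrier has become, here is a minimal sketch of a PyTorch training loop; the synthetic data, model size, and hyperparameters are illustrative assumptions rather than anything from LinkedIn.

```python
# A self-contained PyTorch training loop on synthetic data -- everything
# here (data, model size, hyperparameters) is an illustrative assumption.
import torch
import torch.nn as nn

X = torch.randn(256, 10)                     # 256 examples, 10 features
y = (X.sum(dim=1) > 0).float().unsqueeze(1)  # synthetic binary labels

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```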
Prioritizing AI use cases to drive innovation
Rikki Singh: Where can engineering leaders who want to leverage more ML technologies start?
Dr. Ya Xu: It depends on where they’re starting. Even if organizations don’t have any sophisticated AI workflows, they usually have sophisticated data workflows. I deliberately differentiate the two. When I talk about data workflows, I mean logging the data, having a way of processing large amounts of data offline, and retrieving that data across various parts of the product. These capabilities can provide good foundations for introducing ML and AI workflows.
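To make that distinction concrete, here is a hedged sketch of the kind of data workflow described above: logging events, processing them offline, and reading the results back. The event schema and file layout are hypothetical.

```python
# Hypothetical sketch of a data workflow (not an ML workflow): log events
# as JSON lines, process them offline, and read the aggregates back.
import json
from collections import Counter

LOG_PATH = "events.jsonl"  # assumed log location

def log_event(user_id: str, action: str) -> None:
    """Append one structured event to the log."""
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps({"user_id": user_id, "action": action}) + "\n")

def process_offline() -> Counter:
    """Batch job: count events per action type across the full log."""
    counts = Counter()
    with open(LOG_PATH) as f:
        for line in f:
            counts[json.loads(line)["action"]] += 1
    return counts

log_event("u1", "job_search")
log_event("u2", "feed_view")
print(process_offline())  # e.g., Counter({'job_search': 1, 'feed_view': 1})
```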
For general data workflows, engineers can build the data pipelines once and rarely change them. But ML workflows change all the time because they try to optimize objective functions. So if a company has a tool or platform that can’t support quick iterations, it can become a bottleneck. I think it depends very much on where you are as an organization.
Rikki Singh: Which use cases should companies try to tackle first?
Dr. Ya Xu: Simply, they should prioritize what they have always prioritized in terms of product development. For example, we prioritize based on ROI. When we introduce AI to a product, what is the return? If we’re investing in a new technology, will the return play out over a much longer horizon?
When it comes to investing in AI workflows, it’s easy for companies to go all in. They think they will immediately have more usage of AI and will transform all their products to be AI-focused. But that’s difficult. In the early stages, companies don’t know how their users and customers will react, whether the technology fits their product-market fit, or whether it actually solves customers’ problems.
Instead, is it possible to test an AI workflow on only 1 percent of users to see how they react? If we get early signals, then we can determine what kinds of resources we’ll need and whether we should stage the build toward a bigger model. Start with an initiative that can prove that a certain AI feature brings better experience and value to your customers, and then add to it. For example, add more real-time features or train the models incrementally online in addition to offline.
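One common way to implement that kind of 1 percent exposure is to hash user IDs deterministically into buckets. The sketch below is a generic illustration of the pattern, not LinkedIn’s actual ramping system; the feature name and user ID are invented.

```python
# Hedged sketch: deterministically expose ~1 percent of users to a new AI
# feature. The bucketing pattern is generic, not LinkedIn's actual system.
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Hash (feature, user) into buckets 0-9999; expose the lowest ones."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 10_000 < percent * 100

if in_rollout("user-42", "ai-feed-ranker", percent=1.0):
    ...  # serve the new AI experience and log how users react
```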
Rikki Singh: You talked about the impact side of things, but often the other priority for companies is feasibility. What should people think about from a feasibility perspective when they are prioritizing?
Dr. Ya Xu: All of us build products with constraints. To me, constraint is a parent of innovation. If your feasibility constraint is having only two people to work on a project, or not having the compute resources to do an initiative, then maybe you can see what open-source software you can leverage or try training small increments of data at a time. So instead of wondering if a project is feasible, try pushing the boundaries on innovation.
Leveraging AI at scale
Rikki Singh: What do you consider to be the fundamental elements that help an organization mature its AI or ML capabilities?
Dr. Ya Xu: There are three. The first is the platform, whether that’s an A/B testing experimentation platform or ML platform tools that help facilitate workflows.
More than ten years ago, when I joined LinkedIn, we had just started to realize the importance of AI as a company. In the early stages, the ROI for AI was not apparent to finance teams despite KPI improvements, so we rolled an AI feature back so they could see the dip it left in the metrics. Now we think about that story during every product iteration.
AI innovation has to be proved, which is difficult because building AI capabilities is an iterative process. You start with a model, then slowly and constantly improve it by about 1 percent each time using better data and better feature engineering. In aggregate, these gains can be huge, but without measurement there is no way to show that. Leaders want to see the numbers, which is why having the right experimentation platform is important for any company that develops products, but even more so for companies that need to invest heavily in AI.
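To see how those small gains aggregate, it helps to compound them: fifty successive 1 percent improvements multiply out to roughly a 64 percent lift overall.

```python
# Fifty compounded 1 percent model improvements amount to a ~64 percent
# aggregate gain, even though each step looks small on its own.
print(f"{(1.01 ** 50 - 1) * 100:.1f}%")  # 64.5%
```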
Second is expertise. People who have created AI and ML models before and who have seen how it’s done can offer a lot of guidance on how to build, train, and improve those models. It’s a heavily technical field. To take it to the next level, expertise matters.
Third is culture. Culturally speaking, companies should be more data-oriented. For example, having an A/B testing platform shows that leadership has embraced data and trusts what the data says. To launch one model, you have to have many hypotheses and be able to test whether those hypotheses are correct. Having a hypothesis-driven culture is important for any organization that does AI work.
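In practice, testing such a hypothesis often reduces to a standard significance check, like the two-proportion z-test sketched below; the conversion counts are invented for illustration.

```python
# Hedged sketch of the kind of check an A/B experimentation platform runs:
# a two-sided, two-proportion z-test. The conversion counts are invented.
from math import erf, sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Control: 1,000 of 50,000 users converted; treatment: 1,100 of 50,000.
z, p = two_proportion_z(1_000, 50_000, 1_100, 50_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ~ 2.21, p ~ 0.027 -> significant
```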
Rikki Singh: Creating a hypothesis-driven culture may be one challenge that an engineering leader would run into as they’re maturing and growing on this journey. Are there other challenges that come to mind?
Dr. Ya Xu: Another challenge is designing with other functions in mind. That’s more important now than ever. Ten years ago, our AI team would optimize mostly toward its objective function. But as we matured, we had to revisit our metrics.
For example, when we built our “people you may know” recommendation, the objective function optimized toward the invitation to connect. We would give new users good recommendations on whom to connect with in their network. When we started building those models, we realized that optimizing only for requests is not sufficient because it does not consider the other end. We started to ask, why should we optimize toward invitations sent? We should really optimize toward invitations accepted.
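In training terms, that shift amounts to changing the label in the objective: reward a recommendation only when the invitation is accepted, not merely sent. A minimal sketch, with invented field names:

```python
# Hedged sketch of the objective change: reward a recommendation only when
# the invitation is accepted, not merely sent. Field names are invented.
def label_sent_only(event: dict) -> int:
    return 1 if event["invite_sent"] else 0

def label_accepted(event: dict) -> int:
    # Positive only when the other side accepts -- the two-sided objective.
    return 1 if event["invite_sent"] and event["invite_accepted"] else 0

event = {"invite_sent": True, "invite_accepted": False}
print(label_sent_only(event), label_accepted(event))  # 1 0
```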
Moreover, metrics have to be chosen much more thoughtfully. Companies should decide whether the metrics they choose make sense and whether they’re going to help create holistic value. AI is not just a component of the product; the product itself is AI. On one hand, sales and marketing teams will focus on how to talk about it to customers. On the other hand, PMs are focusing on providing the right user experience. Even after we’ve designed and developed a product, everyone needs to stay involved in the journey more than ever.
Rikki Singh: If we think about the product being AI, what is the right model for setting up core AI, data science, and data engineering teams, and also for the PDLC [product development life cycle]?
Dr. Ya Xu: Every company has a different model. At LinkedIn, for instance, whenever we develop a product, we bring together a cross-functional leadership team. So we have a PM, an engineer, an AI engineer, a data scientist, and maybe even a marketer on the same team.
Rikki Singh: Does the central team also set up the AI standards?
Dr. Ya Xu: Correct. I am also responsible for ensuring that we bring in the right AI innovations, push the right boundaries, and set up the right technology roadmap so we can achieve our goals. We also externally communicate our AI principles.
Fostering responsible AI
Rikki Singh: What do you see as the impact of generative AI?
Dr. Ya Xu: I’m excited about generative AI. It’s helping me be more efficient and more productive. At LinkedIn, we have already launched a few products that help people write their posts, job descriptions, and profiles better, so that’s exciting.
At the same time, with any new technology, there are always risks to watch out for. Think about the first car, for example. I’m sure there were no seat belts or stop signs on the road. But instead of deciding not to use the car and going back to our horses, we chose to make cars safer by enforcing speed limits, adding seat belts, and building better roads.
The technology is still new, so we need to build it with the right safety measures in place.
Rikki Singh: Especially with generative AI, there are concerns around IP leakage and privacy. What are some practices an engineering leader should adopt to foster responsible AI in their organization?
Dr. Ya Xu: First, it’s important to have strong principles. For example, at LinkedIn, we have five responsible AI principles: fairness, creating economic opportunity, privacy, accountability, and transparency. We’ve also translated those into guidelines for engineers.
Second, it’s important to put people first when creating AI responsibly. When you take a people-centric perspective, you realize AI is just one component of the project. For example, we try to make LinkedIn’s recruiter search function as fair as possible, but we don’t have control over the human perceptions or unconscious biases that people bring when they search on LinkedIn Recruiter. So we provided a product feature that lets users hide candidates’ names and pictures while they search. It surfaces only the qualities that matter most when looking for a candidate. In conjunction with what we’re doing on the AI side of the house, we consider what is best for the customer.
Third is enablement. If, for example, there is an entirely separate process to evaluate whether an AI model passes your responsible AI metrics, then it may make development more difficult because someone may forget this step. Building it as a default step of your model training process, model deployment process, or model evaluation process will make it easier for everyone. So it’s important to fully integrate your AI tools with your regular development process.
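One way to make that evaluation a default step rather than a separate process is to gate deployment on responsible AI metrics inside the pipeline itself. A minimal sketch, with a hypothetical metric and threshold:

```python
# Hedged sketch: a responsible-AI check built in as a default deployment
# step so it cannot be forgotten. Metric and threshold are hypothetical.
FAIRNESS_THRESHOLD = 0.8  # assumed minimum ratio of per-group positive rates

def fairness_ratio(rates_by_group: dict[str, float]) -> float:
    """Demographic-parity ratio: worst group rate over best group rate."""
    return min(rates_by_group.values()) / max(rates_by_group.values())

def deploy(model_name: str, rates_by_group: dict[str, float]) -> None:
    score = fairness_ratio(rates_by_group)
    if score < FAIRNESS_THRESHOLD:
        raise RuntimeError(f"{model_name} blocked: fairness ratio {score:.2f}")
    print(f"{model_name} passed the gate ({score:.2f}); deploying...")

deploy("recruiter-search-v2", {"group_a": 0.30, "group_b": 0.27})
```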
Fourth is leadership commitment. It’s important for leaders to be vocal about their support for responsible AI.
Talent strategy to enable AI
Rikki Singh: Should PMs own product strategy and road mapping, or should data scientists?
Dr. Ya Xu: The PM and data scientist should do it together. In my career, I’ve had technical roles. The best thing I experienced was the opportunity to work with people who are not in my function because they taught me how to be a better PM, a better marketer, and a better designer.
When we are designing AI products, the AI capabilities determine whether your product or your product vision can be successful. My advice is to push those two peas into the same pod early on. That way, it’s easier to capture the value AI can provide. At the same time, it can help the PM, who is thinking about the holistic experience, to push those boundaries.
Rikki Singh: What are some sustainable tactics for engineering leaders to attract and retain the right talent?
Dr. Ya Xu: Two tactics have worked well for me. One is hiring people with the right experience and expertise. I think spending time on hiring the right expert matters because they can help you attract, interview, and assess more of the right talent afterward.
Second, we should hire a learn-it-all versus a know-it-all. The field is changing so fast. We have to have the mentality that skills are evolving, and we have to evolve with them. So having talent who wants to learn is more important than someone who claims to know everything—because they may know everything today, but they won’t know everything tomorrow or a year from today.