Gen AI has rightly seized leaders’ attention. But is it also eclipsing lower-profile digital imperatives? On this episode of The McKinsey Podcast, McKinsey senior partners Rodney Zemmel and Kate Smaje, with global editorial director Lucia Rahilly, talk about ideas leaders risk overlooking with gen AI in the spotlight, and how to ensure your digital initiatives—including gen AI—work in tandem to drive meaningful value.
This transcript has been edited for clarity and length.
The McKinsey Podcast is cohosted by Roberta Fusaro and Lucia Rahilly.
Where to start
Lucia Rahilly: Gen AI has been the shiny new object of the business world, but as your new article suggests, it risks blinding leaders to other digital tools also vital to their organization’s success. How do you see leaders balancing that tendency to go after the glitter with the need to maintain focus on other essential business operations and strategies?
Rodney Zemmel: Generative AI is a very shiny object indeed, which sounds a bit disparaging because it can deliver, and is already delivering, real value. But it's the wrong question to ask, "What's my gen AI strategy?" You've got to start with where value comes from and think about how you get value from transforming a domain of your business with technology. Whether generative AI, old-fashioned AI, process digitization, or anything else, that has to be subsidiary to the question of where value is coming from.
Business that’s like your brain
Lucia Rahilly: Your report lays out ten “unsung ideas” on digital and AI, and today we’re going to focus on three of those. Let’s start with the idea that every company will become a neural business. What does this mean?
Rodney Zemmel: There were a couple of amazing scientific breakthroughs last year in our understanding of the brain. At my old lab in Cambridge, we got our first visualization of the connectome, or how all the different neurons fit together—first in a fruit fly brain, and then, from a Google team, in a slice of the human brain. What we see is this incredible, intricate architecture where everything connects with everything else. And we think this is the new metaphor for business.
The old metaphor for how a business is organized is a tree: hierarchical, with branches that extend from each other. But the trouble with that analogy is that it’s very hard to get connections across the boughs of the tree.
To do the most interesting and innovative things in business, you've got to get those connections working much more effectively than they do in many rigid hierarchical organizations. This isn't a new idea. It's a new take on it, and it speaks to the scale of the connections needed.
But you also need to make sure the overall common patterns, governance, and organizing structure are intricate and networked enough to enable teams across the organization to work with each other and form and re-form without complete chaos.
Kate Smaje: This is about enabling speed. Could I, as an organization, work at a higher metabolic rate than today? And second, this is about scale. That’s where some of these common patterns come into play, because the greater the reusability, the greater the pattern recognition, the more you’re able to do at scale, as opposed to reinventing the wheel every time.
Lucia Rahilly: Teams have to function autonomously as they’re forming and reforming. Could you say a bit about what autonomy means in this context?
Rodney Zemmel: One way to take this forward is in what we call a product and platform company. You have a central set of platform services and a distributed network of empowered teams that have autonomy and are aligned against a specific business goal, and they draw services from the central platform.
They’re autonomous in that they’re self-contained and working toward a business goal they own. But they’re not autonomous in that they’re working within an overall company framework and set of objectives, there’s a platform team from which they’re expected to draw services, and they follow rules rather than just go create their own.
What we’ve seen in this product and platform model is that it’s early days. But if we look at a set of companies that have adopted it, the top half of companies in terms of maturity had 60 percent greater total shareholder returns than the bottom half of companies.
Lucia Rahilly: Could you give us an example of a company that has successfully implemented this agile neural network approach?
Rodney Zemmel: One of the companies we talked about in our Rewired book, DBS Bank, widely regarded as a leader in digital banking, has really rethought itself into horizontal cells. As soon as you use a banking example, people say, "Well, that obviously makes sense in a service industry." But another company that's in the book, Freeport-McMoRan, has done this in their copper mining operations.
Kate Smaje: One of the ways any company can test this is by asking a simple question: How quickly can you conceive of, build, and launch a new product or service today?
Rodney Zemmel: The acid test is a good question, because it's hard to find a company these days where a senior leader won't say, "Yeah, of course we're working in agile." And often what that means is they have a technology team working in agile. But it won't always mean that business and technology are working together properly, and frankly, it rarely means that you've got the control functions embedded in those agile pods in the right way.
Lucia Rahilly: Speaking of control functions, what kind of operating model needs to be in place for this kind of neural business to function successfully? And what kind of oversight is necessary for these autonomous teams to function well, including limiting missteps and keeping productivity up in this model?
Rodney Zemmel: Right, and how do you do it at scale? Because again, many companies can do this, but how do you do it across dozens to hundreds of teams? That's the hard part.
First, a lot of it is about talent. You need to put the effort into upskilling and reskilling your talent to be able to work in this model. Then it’s about being super thoughtful on how you staff these teams and how you get your senior leadership team comfortable so they’ll set guardrails and objectives. They’ll participate in reviews at the big milestones, but every decision is not running up the chain to them. It requires a fairly evolved governance framework.
Take data as an example. It's tempting to skip figuring out all the data governance rules within your organization because you can see a clear ROI on the next AI use case. But if you're not putting those data governance models and rules in place up front, then you're going to make it impossible to work in this kind of distributed and scalable model.
Kate Smaje: I ask, "Do I have a set of outcomes that empowered teams are working toward?" This is where it becomes important for management to check in and know how things are going. If you have alignment on the desired team outcomes, then you have transparency into whether we're there yet. And if not, what's getting in the way? How do we do better next time? Last, back to this notion of pattern recognition, how do I make sure I'm solving for reusability?
How the best pull away
Lucia Rahilly: Let’s move to another of these ideas, on digital and AI leaders becoming forever transformers. What are some of the new technologies or trends leaders should be on the lookout for now?
Kate Smaje: In some ways, everybody has the technology they love to geek out on, and we're as guilty of that as the next person. But for me, the magic is less about a single technology or singular trend and more about the combined power of bringing several of these technologies together. It's only when that really happens—and, by the way, the same was true for generative AI—that you create an opportunity for a breakthrough: a new business model, a disruption that hadn't happened before.
Rodney Zemmel: We have an analysis called Digital Quotient or AI Quotient, where we look at how well companies have adopted different digital or AI approaches. In the past two or three years, we’ve seen that industry is no longer destiny. There’s much greater difference within an industry than there is between industries. The most advanced industrial companies are more digitized than the high-tech median, and the least advanced are less digitized than the public-sector median.
What’s behind that is this notion of a forever transformer. You see companies that started on the journey are able to get ahead, keep investing, and, frankly, get further ahead. And you see increasing returns to digital leaders over time as they’re able to pull ahead of others in the industry.
Lucia Rahilly: I have kind of a romantic fascination with quantum. I went through a Carlo Rovelli phase.
Rodney Zemmel: He’s so good.
Lucia Rahilly: He’s so good. I liked to think about quantum entanglement as a romantic construct. But our research shows that some industries stand to gain considerably by applying quantum computing very practically to specific use cases, whereas I tend to think of it as more abstract. What does quantum in practice look like?
Rodney Zemmel: I’m as excited as you are about quantum. In fact, all the conversations we’re having about AI or gen AI today could be about quantum five to ten years from now.
It’s important to emphasize that it is still a science experiment. While the pace of change is amazing, the number of functional qubits, which are the units you need to perform quantum work, that you can get in a quantum computer today is still really small. So it’s still in the research or maybe early development phase.
But if it works, the impact could be absolutely spectacular. Industries being talked about are financial services, pharmaceuticals, chemicals and agriculture, automotive, and a range of others. Essentially, many hard math problems that would take years, or even centuries, to solve using traditional algorithms can go much, much faster with a quantum algorithm approach. The speedup is exponential rather than linear.
So it could affect everything from portfolio construction and performance analysis in financial services to how to design effective catalysts in agriculture—which doesn't sound that exciting, but if you could find a more efficient way to produce ammonia-based fertilizers, the economic impact on the world would be enormous.
I have a feeling that what goes first will be things that resemble real quantum physics problems. But then over time, anything that's a complex math problem—whether how to redesign a delivery route for a logistics company or how to best build up layers of carbon fiber to develop a strong material for aerospace—will become far more tractable with quantum computing when it works.
Making gen AI your superpower
Lucia Rahilly: As gen AI gets better and better and employees become increasingly dependent on it for at least some parts of their portfolio, how can organizations identify which roles or tasks will benefit most from gen AI to create a more productive workforce?
Kate Smaje: Our research undoubtedly says there is opportunity for major productivity gains, but they're pretty hard to realize today. Some of that is because, for any technology breakthrough, you must have an equal and opposite breakthrough on the human side.
How am I going to change the workflow so that I can materially free up time? What am I going to do in terms of learning, upskilling, reskilling, and new career paths that fundamentally reset what humans will do when AI superpowers sit alongside them?
Rodney Zemmel: The gen AI superpower is not finding a way to save 20 minutes in your day; it's finding a way to make using gen AI your first instinct. We've seen it so far in software development. If you just give the tools to a software developer and say, "Here's the latest," the developers will each go find the most boring part of what they do and use it to accelerate that. And you'll get these 5 or 10 percent productivity benefits.
If instead you look at the full team, at a week or a month in the life of the software development life cycle, and not just at how developer A or B does their job on an average Tuesday; if you think about how the whole team changes its work, you train people, and you have real measurement of where it's working better than a human and where it isn't—that's how you build superpowers.
Lucia Rahilly: How are you seeing leaders tackle that learning and training?
Kate Smaje: What I see a lot of is, we’re going to train our organization on tech, on AI. We’re going to teach them what it is, we’re going to explain it, bust the myths, and so on. And that’s important, don’t get me wrong. But it’s one of those necessary but not sufficient things.
What's missing is, for example, learning to use well what is fundamentally an assistive technology. You have to know how to use it to get the best out of it. We certainly see investments in things like teaching how to write great prompts. So rather than teaching you about it, teaching you how to use it becomes really important.
The second is, to Rodney's point, about the day-to-day workflow. It's about making sure you have strong critical thinking skills to parse out some of the complex risk and responsible-AI usage issues, to think about where hallucinations might creep in, and to know what you need to do in the pre- and post-processing of the modeling.
You will probably need to have amazing EQ [emotional quotient] and relational skills, because what the human is going to do—that the tech won't—may be more on that side. Maybe even, frankly, people will need higher cognitive capacity or curiosity to learn, to keep evolving and iterating. For me, there's not yet enough focus on what we might call the nontech skills the human is really going to need in a hybrid intelligence setting.
Lucia Rahilly: Do you see tech professionals as equally in need of upskilling as employees outside the tech sector?
Kate Smaje: It’s both. None of us is immune to the need to keep learning, not least because, in some ways, the pace of change today will never be this slow again. It’s about your ability, even as a tech professional, to understand and be open to the new technologies that will come in.
How do I make sure I’m ready to learn and embrace those? How do I keep getting better at using that technology in my job? As a technologist, how am I going to help get value out of models, not just build great models?
Rodney Zemmel: For the average company, a senior team is going to learn better from seeing what a leading nontech company does than a tech company. What we’ve seen work very successfully is when management teams have done what we call go-and-see visits with other companies, often in industries quite different from their own, where average companies have really applied this and really learned how to transform their businesses with digital and AI.
Once you see a “normal” company do it, that brings the power of the technology to life much more than seeing what digital natives can do—which for the average company, is exciting but feels a bit more like a trip to the zoo than something directly relevant to their daily lives.
What lies ahead
Lucia Rahilly: Do you see a risk of employees becoming overly reliant on AI? And do organizations need to take steps to help ensure employees retain their human judgment?
Kate Smaje: In some ways, we can see “reliance on AI” as a pejorative term or as a real positive, in that employees are using AI to do their jobs better, faster, cheaper, at lower risk. The reality is that human judgment will become more important than ever for making sure these models are built responsibly, and more important in making sure the value really comes out of them.
To make sure humans are using AI responsibly, there are at least two things to consider. One is constantly questioning the technology. The second is constantly looking for the "and": Where does one plus one equal five, where bringing humans and technology together yields a breakthrough that wasn't otherwise possible? As long as we're still doing those two things, human judgment will become more important, not less.
Rodney Zemmel: That said, it’s clear that this is going to be better than humans in many cases. I’ll give you a maybe silly example. In tennis, there’s electronic line judging. Wimbledon and the US Open have gone different routes. In Wimbledon they have electronic line judging, but they keep the humans wearing green blazers standing on the sidelines. In the US Open, they’ve gone all electronic. And most people would say the US Open version is working just as well or better. There’s less interruption of play. There’s less back and forth. There are no more obviously wrong calls.
Interestingly, I'm told that the US Open employs as many people as Wimbledon does, but in different jobs. It's people in the tech control room, and people who are deploying, setting up, and monitoring the technology.
Lucia Rahilly: We recently posted an interview with Reid Hoffman for the At the Edge podcast. He suggested that AI has the potential to develop EQ and soft skills. Any thoughts on how that might affect the AI–human calculus in the workplace?
Kate Smaje: We see this already. Rodney and I often joke about this very small study that was done in the United Kingdom with GPs [general practitioners], where they tested a human versus a bot to see if the patient could tell the difference. The patients pretty much could, in most cases.
Then the patients were asked which they preferred. Staggeringly, most folks preferred the bot. They said, “I felt that it understood my needs better. It was more empathetic. It solved my problem faster.” We shouldn’t underestimate that the technology is already pretty darn good at the qualities we often associate with humans.
Lucia Rahilly: Anything else to call out that might not be top of mind for leaders but should be?
Rodney Zemmel: There’s a question about what the future of the workforce is going to look like. There was a very interesting interview with Garry Kasparov some time ago. He famously was beaten by the IBM chess computer back in the ’90s. He said, “Look, I was the first knowledge worker to lose my job to a computer. And now it’s coming for all of you.”
That’s a bit exaggerated, but that view is clearly out there. There are companies that say, “OK, we no longer need junior people, analysts, people doing routine tasks. We can get away with a workforce or evolve to a workforce that has a very different pyramid shape.”
There are others who say, “Maybe. But this is about the superpowers idea. It means we can make the analysts or the junior people in our company as productive in the future as our most senior people are today, because this is going to take away a lot of the routinized drudgery of what they do and really give them these incredible abilities to create more value.” There’s a view that says the value of the data scientist goes down and the value of the data engineer goes up in the future.
So from a workforce planning standpoint, this is profound. Frankly, the answer doesn’t exist yet. People are going to need some real thinking time and flexibility to evolve what this means for workforce planning.
Kate Smaje: I couldn't agree more. Your point on flexibility has another flavor to it as well, in that so much of what we're really talking about here is constant learning and constant experimentation. And that's very easy to fund, resource, and allocate time to in good times.
It’s much harder to do when economies or markets turn and companies have to batten down the hatches. There is a real challenge for leaders to figure out: how do I have a more through-cycle mindset for investing and learning for the future when the level of certainty, and therefore the level of ROI prediction, is more challenged? Can I have a plan here that’s flexible enough for sunny times as well as rainy ones?