Although the relentless rise of generative AI (gen AI) has many fearing the loss of their livelihoods to machines, it may actually end up benefiting management and workers alike. Ironically, this most advanced of technologies doesn’t require technical skills to use, and instead puts a premium on very human qualities such as experience, judgment, and wisdom. In this episode of the At the Edge podcast, former Fortune CEO and editor Alan Murray speaks with McKinsey’s Lareina Yee about why gen AI marks a turning point in human history, one that must be carefully managed for the good of business and society.
An edited transcript of the discussion follows. For more conversations on cutting-edge technology, follow the series on your preferred podcast platform.
2023: The year gen AI blew minds
Lareina Yee: Alan, thank you so much for joining. In Fortune’s CEO Daily, you wrote about the importance of generative AI and stated, “History will remember 2023 as a year of AI.” What did you see in those early days? What got you more excited about this technological change than about earlier ones?
Alan Murray: Lareina, thank you for having me on. We all knew AI had been around for a long time before that, but what blew my mind was, “Wait a minute. These things can now create letters, plans, poems, literature, and pictures.” That was just a whole new way of thinking about the power of AI. And I clearly wasn’t alone. It captured the imaginations of millions and millions of people.
Lareina Yee: In those early days, just a month after ChatGPT was released, how did you start using the technology? What fascinated you personally?
Alan Murray: This is embarrassing for me to admit, but what fascinated me was simply that you could sit and have a conversation with it. I was just playing around with it in the early days and asked, “Hey, what do you think of Fortune Media?” I wouldn’t say it necessarily came up with brilliant insights, but it got me thinking about the things I should be thinking about.
Lareina Yee: Over the course of the year, the technology’s changed a lot. Are you finding that you’re using it in different ways now?
Alan Murray: The main thing that has changed from those very early days is the data—remember, ChatGPT was initially trained on a set of data that was two years old by the time of its launch. For somebody in the news business, that’s a pretty serious limitation. I use Gemini a lot, and I like the fact that it ties into the Google search engine if I want the citations. If I’m looking for news or the latest information, I find it’s now much more useful at that sort of thing. But that is not to downplay the downsides, the biggest of which is the tendency to hallucinate.
From consumers to the enterprise in record time
Lareina Yee: Before we get into the hallucinations, we saw that it very quickly moved from a personal to a corporate experience and became a top priority for boards and CEOs. Tell us about that.
Alan Murray: In my experience, this has never happened before. The idea used to be to get the consumer hooked, and then the BYOD [bring your own device] movement would get it into the enterprise. So it took a long time to move from consumer adoption into the enterprise. Generative AI took less than two months. We’d never seen anything move into the enterprise that quickly.
Lareina Yee: Why do you think that is?
Alan Murray: Because it moved from the CTO [chief technology officer] and the CIO [chief information officer] to the CEO. And because every CEO could do what I did and say, “Wait a minute. I can use this, and it’s giving me interesting results. My imagination is running wild, thinking of ways I could use this technology inside my company.”
The four horsemen of gen AI risk
Lareina Yee: I agree with you. It’s so easy to use. You don’t need any special training. You don’t need to be a data scientist. You can be anyone, speak any language, and use it. But when we look at companies, I would say that a lot of people are still in pilots.
Alan Murray: I’m seeing the same thing, along with the four horsemen of risk that come with it. First, there is data risk: a fear of, “Do we have proprietary data that someone is going to leak out through this model and make accessible to the broader world?”
Then there’s the intellectual property risk that has companies asking themselves, “Is there some intellectual property somewhere in this model that’s going to cause someone to come after us someday and claim that we built part of our business on their property and demand a return?”
And then there’s the biggest risk, the one I’ve already mentioned: hallucinations. It’s simply not reliable. I asked it to write my biography, and it cited me as the author of a book that I not only didn’t write but that doesn’t exist. That kind of stuff is a little bit scary.
Number four is bias. Can we be sure that there are not biases baked into the data the model was trained on that will continue to be displayed going forward? How do we make use of this incredibly powerful technology, protect our data, make sure we’re not exposing ourselves to intellectual property risk, and make sure it’s accurate?
The problem is that people are going to get lazy and just let it go. They’re not going to double-check everything. What’s the point of having the technology if you feel you have to double-check every part of it? So I still think there are a lot of issues driving caution within organizations that have to be sorted out.
Two steps forward, one step back
Lareina Yee: We have a conundrum. On the one hand, we’ve got great excitement about a technology that’s easy to understand and moving very fast. On the other hand, we’re hesitating because it’s not ready. So how do you think CEOs are managing through those two things?
Alan Murray: No technology adoption proceeds in a straight line. There is always a little bit of stepping back, taking stock, and figuring out how you deal with the problems. That includes putting the processes in place to protect company data, checking the intellectual property, and using technologies like retrieval-augmented generation [RAG] to reduce the risk of hallucinations.
But that kind of thing takes time, and it slows the process down a little bit. I think we’re in one of those periods of, let’s just take a pause, take a breath, and then figure out how we can go forward.
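Alan mentions retrieval-augmented generation only in passing, so a concrete illustration may help. The Python sketch below shows the basic pattern: retrieve relevant documents first, then ask the model to answer only from them. Everything in it is a hypothetical stand-in for illustration; the toy keyword-overlap retriever takes the place of a real embedding index, and the generate function is a placeholder for an actual model API.

```python
# Minimal sketch of retrieval-augmented generation (RAG), for illustration only.
# The document list, the keyword-overlap retriever, and generate() are all
# hypothetical stand-ins; a real system would use an embedding index and a
# hosted language model API.

DOCUMENTS = [
    "The 2023 annual report shows revenue grew in every region.",
    "Company policy requires two-factor authentication for all employees.",
    "The newsroom style guide asks writers to cite a source for every claim.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder for a real model call (e.g., a hosted LLM API)."""
    return f"[model answer grounded in a {len(prompt)}-character prompt]"

def answer(query: str) -> str:
    # Grounding the prompt in retrieved text gives the model something
    # concrete to draw on, which is what reduces hallucination risk.
    context = "\n".join(retrieve(query, DOCUMENTS))
    prompt = ("Answer using ONLY the context below. "
              "If the context is insufficient, say you don't know.\n"
              f"Context:\n{context}\n\nQuestion: {query}")
    return generate(prompt)

print(answer("What does company policy say about authentication?"))
```

The design point is simply that anchoring the prompt in retrieved text gives the model something to cite, which is what reduces the hallucination risk Alan describes.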
Early-adopter use cases
Lareina Yee: Given your experience with Fortune’s Brainstorm AI conferences and spending so much time with global leaders, what are some of the use cases that you do think are ready now, despite the risks and concerns?
Alan Murray: I’ve heard some pretty amazing stories of how it’s being used by marketing operations to create and recraft compelling messages or even personalize messages for particular audiences. So a lot of people in the marketing area were early adopters. It was instantly embraced by coders and developers too, because a lot of code gets written over and over and over again. So now you can just call upon your copilot.
Lareina Yee: And this is adoption that plays out over a decade, or even two. We’ve looked at it closely, and we see about $4.4 trillion in potential productivity value.
Alan Murray: That’s huge.
Lareina Yee: It is huge. And in a world where we’re all looking for growth, you can’t afford to ignore this technology. But if we’re at the very beginning, and if we see such enormous potential, is it a step too far to say that this is the Fourth Industrial Revolution, or the next printing press moment for humanity? Is that hyperbole, Alan?
Alan Murray: I don’t think it is. When you think about how pervasive this technology may be ten years from now, and how much of our lives we are going to spend chatting with generative algorithms, it can blow your mind.
Mass-layoff fears misplaced
Lareina Yee: When we’ve looked at other automation technologies, perhaps not as profound as this one, the transition of jobs has been a pretty serious miss in some cases, or at least something that, to be generous, we’ve been clumsy about. How do we think about reskilling, if that’s even the right term?
Alan Murray: I think history tells us to question the notion that everybody’s job is going to be challenged and we’re therefore going to have massive unemployment. First of all, look at the numbers. We still have a talent shortage, and demographics are making it even worse. I don’t think that’s going to change. People worried for years that the mechanization of agriculture would put people out of work, and it never produced the mass unemployment they feared. It’s early days, so I could be wrong about this. But the interesting thing about this technology is that it’s not highly technical.
What you need is judgment and maybe some sectoral knowledge to be able to use it well. It calls for far more human skills. Maybe we’ll go back to liberal arts degrees. And remember, it wasn’t that long ago that Businessweek ran a cover story declaring that everybody needs to learn how to code.
But now, the code’s writing itself, thank you very much. So we don’t need everyone learning how to code. But we do need people with experience, judgment, and wisdom, so they can ask the right questions, prepare the right prompts, and know when the algorithms are spewing out nonsense.
There is some early research that I’m sure you’re aware of, Lareina, that says—unlike the past two decades—this technology could actually reduce inequality among workers rather than increase it, because you often see the biggest productivity gains among less skilled and qualified workers. I think that’s pretty interesting.
Given the demographic challenges that many countries, including the US, face, the ability of generative AI to harness the wisdom and experience of older workers in a productive way is pretty exciting. We could be bringing a lot of people back into the workforce to fill high-end jobs, where their decades of experience and knowledge and wisdom are at a premium.
Rethinking the value of a liberal arts education
Lareina Yee: Alan, Khan Academy’s Khanmigo AI tutor and teaching assistant is very much what you described at the start of this conversation. You spent your first months with the technology in conversation with it. And that’s what Khanmigo is, a conversation, but not with an executive or CEO. Instead, it’s with a high schooler or college student. So I think if we were to push this a little bit further, you could completely reimagine education.
Alan Murray: Totally. Generative AI creates an interactive means of learning, which we all know is much more effective. It’s like Khanmigo: you talk to it. It gives you an answer. You ask it questions. It responds. You can go back and forth and exchange ideas. It’s just much more effective as a learning tool than most of what’s out there today.
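To make that back-and-forth concrete, here is a minimal sketch of such a tutoring loop, again in Python. Khanmigo’s internals are not public, so the ask_model function is a hypothetical stand-in for a call to a real language model; the point is only that the full exchange is carried forward each turn, which is what makes the interaction a conversation rather than a one-shot lookup.

```python
# Minimal sketch of an interactive tutoring loop of the kind described above.
# ask_model() is a hypothetical stand-in; a real tutor such as Khanmigo would
# call a hosted language model with pedagogical guardrails.

def ask_model(history: list[dict]) -> str:
    """Placeholder reply that nudges the student, Socratic-style."""
    last = history[-1]["content"]
    return f"Good question. What do you already know about '{last}'?"

def tutor_session() -> None:
    # Carry the whole exchange forward each turn so replies stay in
    # context; that is what turns a lookup into a conversation.
    history = [{"role": "system",
                "content": "You are a tutor. Guide the student; "
                           "do not just give answers."}]
    while True:
        student = input("Student (press Enter to stop): ").strip()
        if not student:
            break
        history.append({"role": "user", "content": student})
        reply = ask_model(history)
        history.append({"role": "assistant", "content": reply})
        print("Tutor:", reply)

if __name__ == "__main__":
    tutor_session()
```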
Lareina Yee: Alan, if you could rewind and go back to school, would you study different things? You brought this up a couple of minutes ago, and I’ll put it in a starker way. There’s been a lot written in the press about the death of the English major. I’m personally rooting for English majors. So I’m curious, would you study something different, given what you know now?
Alan Murray: I was an English major.
Lareina Yee: So there you go.
Alan Murray: You’re calling me out. I did study economics later on. I’ve always wished I had studied more history alongside English. I don’t regret not taking more technical courses. I think a lot of schools have, over the course of the past decade, moved away from liberal arts and toward coding and computer science. That’s been fine, and maybe it was right for the times, but we need to rethink that.
What people are really going to need in the future are two things. One is interpersonal skills. Yes, it’s great that the technology can do all these things, but getting people aligned around how to use the technology and what the goal is relies on those fundamental skills of human organization and leadership. You’ve got to get those skills right, so I would definitely put a high premium on them.
The second thing is wisdom and knowledge. How do you teach wisdom? I don’t know. But I think English isn’t a bad way of doing it. History’s not a bad way of doing it either.
The importance of adapting to the pace of technological change
Lareina Yee: Alan, you counsel and convene so many global leaders. How do you think this technology changes the way the person in the corner office thinks about learning and education?
Alan Murray: The need there is intense. And generative AI is a technology that most of the Fortune 500 CEOs—if not all of them—had never heard of in November 2022. By January 2023, their boards were telling them it was going to disrupt and remake the whole company.
I’ve spent a lot of time moderating conversations between executives on technology issues, and I’ve noticed a very clear pattern when discussing this new technology that’s going to transform the enterprise. Within 15 to 25 minutes, the conversation changes and we’re not talking about technology anymore; we’re talking about people.
The real challenge is not, “How do you get the technology to do what you want it to do?” It’s, “How do you get people and human-based organizations to adapt to the pace of technological change?” I think that’s an area that needs more attention.
Technology-driven stakeholder capitalism
Lareina Yee: Something you’ve written a lot about as well is stakeholder capitalism. How do you think about generative AI in a stakeholder capitalism rubric?
Alan Murray: I think the two trends, the technology revolution and the movement toward stakeholder capitalism, are very closely related. There was a really interesting study done about five years ago that compared the balance sheets of the Fortune 500 companies 50 years ago with those today, and it asked: Where is the value coming from according to these balance sheets?
In the 1970s, you find that more than 80 percent of the value came from physical stuff. Do you have a plant? Do you have equipment? It’s kind of understandable that there was such a big emphasis on shareholder value 50 years ago, because the shareholders provided the money you needed to buy the physical stuff, which in turn created the value.
If you do the same exercise today with the balance sheets of Fortune 500 companies, more than 85 percent of the value comes from intangibles. It’s not physical stuff. It’s intellectual property. It’s the brand value with customers. It’s all the things that are much less tied to physical stuff and much more tied to human beings.
That’s a profound change that’s been driven by technology. And it means that you have to manage these companies in a very different way, because the goal is not, “How do I get enough money to buy all the stuff?” The goal is, “How do I motivate, engage, attract, and retain the best talent?” Because that’s where the value comes from.
A lot of what’s happening in so-called stakeholder capitalism, where you’re paying more attention to the planet, to the people you employ, and to societal inequality, is really a recognition that the 21st-century corporation is much more human than the 20th-century corporation. So you have to think differently about it.
Also, the pace of change has become so fast that the 20th-century way of managing doesn’t work anymore. The CEO’s job and the C-suite’s job have become much less about telling people what to do and much more about inspiring them, engaging them, giving them a clear purpose, giving them a North Star, and giving them the guardrails they need to operate. All of that requires you to deal much more with people and with their whole selves, with all of their hopes, dreams, and aspirations.
Lareina Yee: I love the frame that it’s not about the technology; it’s about the people. The technology is in service of what? It’s to enhance humanity at its highest aspiration. So I think there are things that only humans can do, and there’s some power in interacting human to human. But the technology is moving so fast that these tools can take on some of those qualities; they can even try to show empathy, which is a human emotion.
Alan Murray: But they’re not people. Human beings are social animals. When you’re with a group of people who you admire or trust, there’s a lot going on there besides just the exchange of words. There are things going on that are chemical.
Drawing lessons from the success of nuclear arms control
Lareina Yee: As we sit on the cusp of the Fourth Industrial Revolution, what’s ahead of us, and what lessons do you think we could take from history into this next generation?
Alan Murray: I think we could start by taking comfort that this is not going to eliminate all human activities. There’s still going to be plenty of stuff that human beings will need to do to make society work. So I don’t worry about massive joblessness.
I do worry about people being able to adapt quickly enough to make use of this technology. I read Mustafa Suleyman’s book, The Coming Wave: Technology, Power, and the 21st Century’s Greatest Dilemma, and I think it’s very good. On the one hand, he talked about the near hopelessness of containing or controlling these big waves of technology.
On the other hand, he pointed to nuclear arms control and what it has accomplished over the course of the past 75 years to reduce the risk of nuclear war. And the results are pretty stunning. So I think he raises an interesting question, which is, “Is there a lesson there that the world can learn from to mitigate some of the negative consequences of technology?”
And by the way, Lareina, I should say I’ve been very positive and upbeat in this conversation, because I am positive and upbeat generally about the technology. But I do think about the danger of generative AI polluting the information ecosystem, particularly around politics and elections, which are already polluted enough. Some people say, “Gee, it couldn’t get any worse.” But it could get worse. The ability of generative AI tools to proliferate and personalize political messages that could have a real effect on electoral outcomes is very scary.
And more broadly, the danger that generative AI and deepfakes could further undermine trust in institutions—which has already been deeply undermined by various developments over the past 20 years—scares me a lot.
Business leaders have an important role to play
Lareina Yee: Alan, one of the most remarkable things about journalists is the ability to ask the right questions. As you think about government leaders, what are the two or three most important questions for them to consider in the age of AI?
Alan Murray: Question one is: How do we create a saner political system so people of goodwill of various political persuasions can have a reasonable conversation about the very real problems that face the world? This is an instance where technology is feeding the problem.
If you think about what this technology enables, you can imagine a campaign using incredibly sophisticated targeting technology based on massive population data. They know what your hot-button issues are and what it would take to persuade you to vote one way or another.
Lareina Yee: That is the biggest threat. And bringing it down to the everyday person, it seems to me this actually raises the bar on our ability to be critical consumers of information.
Alan Murray: We have to be critical consumers of information. So, what are some commonsense rules that can help mitigate the risk? Maybe we need to think about requiring watermarks on everything that’s AI generated to make it clear to people and be fully transparent.
We also live in a society where business leaders, maybe somewhat surprisingly, have emerged as the most trusted authorities. More trusted than journalists, more trusted than members of Congress, and more trusted than NGOs [nongovernmental organizations]. Is there a role the business community can play in helping us address these problems?
One of the interesting developments of the last decade was large companies saying, “We have to play a role in mitigating climate change. We have to be active.” And, as you know, there’s been a 300 percent increase in companies that have net-zero plans that are not just 2050 plans but 2030 plans.
That’s a big change. They’re taking it very seriously. And they’re taking it seriously because climate is an existential risk to the survival of their company. This political dysfunction is also an existential risk to the survival of these companies. Is there a more constructive role companies can play in mitigating political dysfunction as well?
Lareina Yee: At the very least, being a voice at the table. I think that’s part of the challenge of an expanded level of responsibility: whether you embrace it or not, as a trusted institution and a leader, the question is now about more than profitability.
So, Alan, I did a little bit of research and learned you were a Luce Scholar, something that we share. You spent your year working for the Japan Economic Journal in Tokyo. If you were a Luce Scholar in 2024, how would you spend your year now? Where would you go? What would you do?
Alan Murray: Since we’re confined to Asia, I might go to Vietnam because of its history and the role it’s playing in taking over some of the manufacturing from China.
Lareina Yee: So you’re going to Vietnam. You’re packing your suitcase. Behind you are a whole bunch of hardcover books. What’s on your book list these days?
Alan Murray: Anything good about AI. As I said, I read Mustafa Suleyman’s book, and I want to read Fei-Fei Li’s book, The Worlds I See. I had an interesting conversation with someone the other night who told me I should be reading Anthony Trollope’s The Way We Live Now. And I’m not exactly sure why, but I’m going to take him up on that because I have a little time on my hands. I’m an eclectic reader and read lots of different things.
Lareina Yee: Alan, thank you so much.
Alan Murray: Thank you.