Property and casualty (P&C) underwriting can be significantly improved using AI. Insurers’ capacity to accurately estimate, price, and mitigate risks is core to their value to the market, and today, they are contending with a new set of challenges that has disrupted traditional approaches to risk. Specifically, climate risk has reached new levels of urgency as recurring floods, wildfires, and other catastrophes erode the insurability of the communities they strike and widen protection gaps.
So how can AI and climate intelligence help improve property underwriting—and what’s the value proposition? With limited dollars available and numerous climate risk–related priorities, where might initial investments be most beneficial? How does the evolving landscape open the door to excess and surplus (E&S) lines carriers? McKinsey sat down with leaders from ZestyAI, an AI-enabled property and risk analytics platform, to seek answers to these questions. The following roundtable has been edited for clarity.
Understanding climate risk at the property level
McKinsey: How can AI help property insurers, and what are its limits?
Kumar Dhuvur: To me, the carriers that adopt AI first will definitely see a massive benefit. As other carriers cede market share, the forward thinkers will be able to protect their market share and garner a larger share of the profits by leveraging models that manage risk better. Obviously, AI is a key enabler. But at the end of the day, it’s all about a better risk-segmentation model, and that is what lets them capture that value.
Attila Toth: And that’s what we focus on: value. Not nice-to-have R&D, not a secondary review on risk, not process efficiency enhancements. It’s moving AI to the core of the business, which is underwriting and rating, and avoiding billions of dollars in losses as an industry. It’s about proving and realizing that value.
Kumar Dhuvur: AI will not only help in setting better rates, but it will also help segment risk more accurately. But the question is, what do you do with those insights? One of the things we have done is provide not only risk scores but also the underlying drivers of those scores at the property level. So if you really want to reduce systemic risk instead of just segmenting it, you also have to take steps to bring the risk down for each and every property. And I think this is where AI will be quite helpful.
And then if you can figure out how to actually reduce the risk of a property, how much does it cost? Does it all have to be borne by the policyholder or homeowner, or does the government provide some incentives? I think it now becomes a larger discussion, and I’m hoping that, especially for some of these perils, like hurricanes in Florida and wildfires in California, these discussions could really help bring down the systemic risk rather than just repricing it.
Attila Toth: As you know, a lot of public and private funding is targeted at climate risk mitigation. If you look at how much money electric utilities in California, for example, are spending to mitigate risk in their own systems, you have to ask, “How do you spend those dollars most efficiently?” AI has a very big role to play in understanding the structure-specific vulnerability of critical infrastructure and of the homes and businesses that depend on it.
In the case of P&C insurance, it’s very important to sequence the use of AI so that the highest-value but least-intrusive efforts come first. For example, we may look at an entire portfolio and suggest some portfolio shaping rather than plugging into core systems on day one—because we all know that making such a sudden switch is challenging. And when you sequence for value, you put the most invasive option—let’s call it open-heart surgery—as late in the process as possible. Value first and open-heart surgery last. Unfortunately, it’s not always feasible to create a lot of value without open-heart surgery, meaning major process reengineering and major IT integration. But there is a lot we can do up front by just tweaking some underwriting rules, providing batch data transfer for portfolio reviews, structuring the reinsurance contract, and so forth. Those early steps don’t require a lot of IT integration.
McKinsey: How are you seeing the industry evolve in terms of having interpretable AI solutions, ones that let you go back five years and justify exactly why certain decisions were made? Where is the industry on that learning curve?
Attila Toth: This was a very important point we had to prove when we sought regulatory approval for one of our models. We had to show that this is not a black-box model. We had to show that the model makes predictions based on real data and that, even though we are not manually assigning weights, the outcome is explainable given certain inputs and given how the model prioritizes those inputs.
That’s the reason we have decided to make the model fully transparent and not just give a determination. As Kumar explained, the model also provides so-called reason codes. And those reason codes are logged into the systems of our customers. So we can say, “We are giving you a wildfire score of eight because you have 70 percent overhanging vegetation, you have a wood shake roof, and you are on a 15-degree slope.” The first two, you can change; the last one, you cannot. And then when renewal season comes, the insurance company can say, “Dear homeowner, have you made those recommended changes to mitigate your risk?” That’s how we convinced the California Department of Insurance [CDI] that our AI model provides a fully transparent view into risk and mitigation. And of course, we had to do a lot of out-of-sample validation to get to this point. This is how we ensured the model is properly designed and fully compliant with the latest actuarial standards.
But what truly convinced them is the reason codes. They could simply look at an image and see the reason codes associated with it and the next steps that lead to mitigation. That’s how this has gotten through as part of approved rate filings.
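For illustration only, here is a minimal sketch of how a property-level score with logged reason codes might be structured. The feature names, thresholds, and point values are hypothetical and are not ZestyAI’s actual model; the point is that the reason codes, not the score alone, are what an underwriter or regulator can act on at renewal.

```python
# Toy sketch of a property-level wildfire score with reason codes.
# Feature names, thresholds, and point values are hypothetical.

from dataclasses import dataclass


@dataclass
class PropertyFeatures:
    overhanging_vegetation_pct: float  # share of the roofline with vegetation overhang
    roof_material: str                 # e.g., "wood_shake", "asphalt", "tile"
    slope_degrees: float               # terrain slope at the structure


def score_property(f: PropertyFeatures) -> tuple[int, list[str]]:
    """Return a 1-10 score plus human-readable reason codes for logging."""
    points = 0.0
    reasons = []
    if f.overhanging_vegetation_pct > 30:
        points += 3
        reasons.append(f"VEG_OVERHANG: {f.overhanging_vegetation_pct:.0f}% overhanging vegetation (mitigable)")
    if f.roof_material == "wood_shake":
        points += 3
        reasons.append("ROOF_WOOD_SHAKE: combustible roof material (mitigable)")
    if f.slope_degrees > 10:
        points += 2
        reasons.append(f"SLOPE: {f.slope_degrees:.0f}-degree slope (not mitigable)")
    return max(1, min(10, round(points))), reasons


# The property described in the interview: 70% overhang, wood shake roof, 15-degree slope
score, reasons = score_property(PropertyFeatures(70, "wood_shake", 15))
print(score, reasons)  # reason codes would be logged with the policy record and revisited at renewal
```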
McKinsey: If we segment AI adoption in insurance into a few categories—innovators, early adopters, and then the traditional chasm to get to the majority that are still in early stages—where is the industry overall? Are we still prechasm, or are we postchasm as we’re starting to get that early majority?
Attila Toth: I would say prechasm. Early adopters are committing, while many others are busy kicking the tires.
Sebastian Kasza: I would say people are starting to take the leap across the chasm, partly because of the threat of adverse selection in markets where players using AI solutions are achieving broader adoption and better underwriting performance. In markets like California, for example, other players are going to start feeling the burden of adverse selection and will want to jump across that chasm.
Transitioning toward an excess and surplus market
McKinsey: One of the thorniest aspects of insurance right now is the migration from the admitted market to the E&S market.1 That gives players in the E&S market a lot more flexibility. AI has a big role to play because the only way to navigate something like this is by having the best predictive intelligence capabilities. How do you see this playing out?
Kumar Dhuvur: The regulatory landscape is a factor in places where we are seeing the admitted market shrink and the E&S market fill the gap. Rate adequacy is a challenge in several of these markets, and that’s largely driven by how long regulators take to approve rates. Also, many regulators demand that rates be substantiated with backward-looking—but not necessarily forward-looking—data. That paradigm of backward-looking data doesn’t always take into account the increasing climate event frequency and severity we’re seeing. Still, incorporating forward-looking data is not an easy sell unless there is some kind of crisis—such as carriers leaving a state.
So I think all of that adds to the stress of carriers saying, “Hey, I’m done with the admitted market. I’m going to walk away from this state.” And we are seeing that unfold in Florida with all those regulatory issues that have happened over the past decade or so.
McKinsey: Do you consider this E&S transition a crisis?
Attila Toth: I think the E&S transition is a consequence of the crisis rather than the crisis itself. What’s unfolding now in Florida and what could easily happen in California: that’s the crisis. And part of the crisis, and I think even the DOIs [departments of insurance] would agree here, is traditional risk models that spread risk across large regions instead of taking a laser focus on the individual property. With a traditional model, for example, everything east of I-95 in Florida might be considered the same risk when, in reality, it’s not.
All that said, crisis drives action. Look at what the CDI is doing in California by requesting property-specific mitigation discounts in rating plans of all admitted carriers. How can you do that efficiently? With AI.
You can’t send inspectors to look under everybody’s deck and accurately measure overhanging vegetation on ten million homes. You need models at scale, and you cannot use stochastic models, which were designed for portfolio-level risk assessment. You have to use fine-resolution, property-specific models for that. And once you have the data from running computer vision models on aerial and satellite imagery, you have to feed it into predictive models trained on enough loss history. That’s where we have seen an opening. I believe that’s where the future has to go: risk assessment has to be rooted in property-specific insights, and it has to be forward looking. It can’t be just stochastic simulation.
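As a loose, hypothetical sketch of the pipeline described above (imagery-derived structure attributes joined with loss history to train a property-specific model), the file names, columns, and model choice below are assumptions rather than a description of any production system.

```python
# Hypothetical sketch: join computer-vision-derived property attributes with loss
# history and fit a simple property-level loss model. File and column names are invented.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Structure attributes extracted from aerial/satellite imagery, one row per property
features = pd.read_csv("property_attributes.csv")  # property_id, veg_overhang_pct, roof_material, slope_deg, ...
losses = pd.read_csv("loss_history.csv")           # property_id, had_wildfire_loss (0 or 1)

df = features.merge(losses, on="property_id")
X = pd.get_dummies(df.drop(columns=["property_id", "had_wildfire_loss"]))
y = df["had_wildfire_loss"]

# Hold out data for out-of-sample validation, as emphasized in the interview
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("Out-of-sample AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

In practice, batch scoring an entire book with a model like this is what enables the portfolio reviews mentioned earlier without deep IT integration.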
McKinsey: How do we think about community-based insurance or other product innovations, such as parametric coverage sitting on top of typical policies, to try to create the right incentives around mitigation? Any other thoughts on product innovation or new risk-transfer solutions that AI might enable?
Kumar Dhuvur: Parametric, which you already mentioned, is still at a pretty early stage. It hasn’t gotten much adoption because it isn’t fundamentally solving the actual need for insurance. People want a payout when they have a big loss; they don’t want a tiny payout just because an event happened. So it doesn’t get to the heart of the problem, but it definitely has a role to play. Whenever you’re able to build models that accurately segment risk, you are signaling to policyholders, through pricing, the real level of risk of the particular asset they’re insuring.
McKinsey: If AI maturity, technologically speaking, is on an exponential curve and you look three years in the future, what are we going to see?
Kumar Dhuvur: This is where you will see a collision between an exponentially improving product and a fairly static, or maybe linearly improving, regulatory and IT landscape. You can build products that are ten times better, but if you can only bring them to market at the pace at which regulators approve them, it’s still going to be a challenge.
Attila Toth: The technology is already mature enough with AI, and there is unique data available at our fingertips to drive a 20 to 30 times improvement. The most important thing is how fast an insurance industry that is risk-averse by design can truly embrace technology at its core. While insurers may have embraced a bot era, in which they’re automating processes with AI, such as uploading mobile inspection photos, they have yet to realize the full potential of technology in underwriting. I know the technology is out there, and yes, E&S may adopt it faster, but that is fast growth from a smaller base. The hurdles Kumar mentioned for admitted business will persist for some time.
McKinsey: There’s a world where parts of the US—California, Florida—and potentially parts of other countries become uninsurable based on current capabilities. Can AI prevent that?
Attila Toth: I believe AI and adjacent technologies are going to increase reinsurance capacity and, therefore, primary insurance capacity in the market over time. That will be driven by the level of comfort insurers and reinsurers have in those models and the capital they’re willing to allocate against them. But it will continue to be a measured process.
I also believe climate change is not cheap and that multiple constituents, including homeowners, business owners, and local governments, will need to bear its increasing cost. It would be foolish to believe insurance alone is going to price for it all adequately.
The primary hurdles to AI in US property insurance
McKinsey: We started with the notion that if you prove the value, you get past some of the inherent blockages or reluctance. In our heavily regulated industry, how much of the braking comes from regulation? What role does it play?
Attila Toth: I would put regulation as the second hurdle, behind IT. An outdated IT stack is the biggest one, in my opinion. Enlightened CEOs want to go much faster. The top executives know where the industry is going, and they know predictive models based on AI are better than the 30-year-old models they have been using. They know that, right? But these leaders are really bogged down by the current state of their IT infrastructure.
And then regulation is also a challenge. The California Department of Insurance is one of the leading regulators in the country, and we got the first AI model approved as part of a rate filing there. It was an arduous, very difficult three-year process. All that said, they said yes to a gradient-boosted machine [GBM], not a GLM [generalized linear model]. They hired their own independent actuaries, and they said yes to it. This became the first AI model, and only the second wildfire risk model, underpinning an approved rate filing in California history. The other is a 1991-vintage model that hasn’t changed much since then.
So there’s hope. Even the biggest skeptics eventually come around. I believe regulation is a near-term hurdle, but it will fade as more and more DOIs let such models into their approved rate filings. I think this will eventually get easier, while the IT challenges will persist for a while.
The third challenge I would mention is the talent gap represented by an aging workforce in insurance. Who is going to execute on a multiyear program that is looking at modernizing risk selection, rating, reinsurance, and operating-expense reductions? Who is that next generation of leadership at these large insurance companies who will execute on a major rebuild like this?
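To make the GBM-versus-GLM distinction concrete, here is a purely illustrative comparison on synthetic data (not the model in any approved filing): a logistic regression stands in for the GLM, and both models expose inspectable drivers, even though the GBM has no hand-assigned weights.

```python
# Illustrative GLM-vs-GBM comparison on synthetic property data. Not any filed model.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier  # GBM
from sklearn.linear_model import LogisticRegression      # GLM analogue

rng = np.random.default_rng(0)
n = 5_000
veg = rng.uniform(0, 100, n)       # % overhanging vegetation
slope = rng.uniform(0, 30, n)      # terrain slope in degrees
wood_roof = rng.integers(0, 2, n)  # 1 = wood shake roof

# Synthetic loss process with an interaction a purely linear model cannot capture
logit = -4 + 0.02 * veg + 0.05 * slope + 1.0 * wood_roof + 0.03 * (veg / 100) * slope
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
X = np.column_stack([veg, slope, wood_roof])

glm = LogisticRegression(max_iter=1000).fit(X, y)
gbm = GradientBoostingClassifier().fit(X, y)

print("GLM coefficients (hand-inspectable weights):", glm.coef_[0])
print("GBM feature importances (no manual weights, still inspectable):", gbm.feature_importances_)
```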
Sebastian Kasza: Regulatory bodies have incentives to create capacity, avoid an insurance crisis, and ensure their citizens are protected. When regulators approve AI tools and capacity opens up as a result, more families are protected and fewer have to turn to insurers of last resort. The more the use of AI opens up capacity in the primary insurance market, the more you’re going to see that paradigm shift.