Artificial intelligence may be transforming underwriting and claims, but for insurers, it’s also introducing a fast-expanding set of risks – many of which are not yet clearly accounted for in policy language. Speaking at the National Insurance Conference of Canada (NICC), Michael Berger, head of AI insurance at Munich Re, warned that insurers must start treating AI as a distinct exposure, not a hidden extension of traditional coverage.
“The most fundamental risk we see is essentially the correctness of AI and AI output,” Berger said. “However, there are many more AI risks.”
He cited research from the Massachusetts Institute of Technology cataloguing more than 700 distinct AI-related risks – a sign, he said, that exposures already extend across standard lines of business.
Berger said the insurance industry is beginning to confront the question of “silent AI” – similar to how “silent cyber” once forced underwriters to clarify whether policies unintentionally covered cyber events.
The question now, he said, is how much AI risk insurers are covering in their existing products, and how much they want to cover.
So far, he said, the market has taken two main approaches. One is defensive – filing AI-related exclusions under general liability, umbrella, and excess lines, as seen recently in the United States. The other is affirmative, with carriers rolling out dedicated AI products to cover algorithmic errors and related exposures.
“We’ve seen a mixture of reactions,” Berger said. Over time, he added, it will become much clearer which AI risks carriers want to cover within existing programs, which they should exclude, and which they should insure through standalone AI products.
As Munich Re built its AI underwriting capabilities, Berger said the focus was on hiring specialists who could quantify risks that traditional actuarial models were never designed to address.
When you look at probabilistic risks like AI output errors or IP infringement from generative models, standard actuarial assumptions such as the law of large numbers and the central limit theorem don’t hold, he said.
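To see why pooling alone fails here, consider a minimal simulation – my own illustration, not Munich Re’s methodology, with every frequency assumed for the example. When errors are independent, the average loss rate across a growing pool concentrates around its mean; when policies share a latent model flaw, it never does.

```python
# Compare how the average loss rate behaves as the pool grows, under
# independent errors versus errors driven by a shared model flaw.
# All probabilities below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_sims, p_err, p_shock = 10_000, 0.02, 0.02
p_given_shock, p_idio = 0.8, 0.004   # calibrated so both scenarios average ~2%

for n_policies in (100, 1_000, 10_000):
    # Independent errors: the average loss rate concentrates around p_err
    # as the pool grows (the law of large numbers at work).
    avg_ind = rng.binomial(n_policies, p_err, n_sims) / n_policies

    # Shared-model errors: a single latent flaw raises every policy's
    # error probability at once, so the average never settles down.
    shock = rng.random(n_sims) < p_shock
    p_policy = np.where(shock, p_given_shock, p_idio)
    avg_corr = rng.binomial(n_policies, p_policy) / n_policies

    print(f"n={n_policies:>6,}  sd(avg loss) independent={avg_ind.std():.4f}  "
          f"shared flaw={avg_corr.std():.4f}")
```

The shared-flaw standard deviation stays roughly constant however many policies are pooled – precisely the diversification failure Berger describes.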
To tackle that problem, Berger said Munich Re hired PhD researchers in mathematics and statistics directly from universities to develop new quantification methods. The team also collaborated with Oxford, Stanford, and Berkeley to refine those models – work that has been published in academic journals and shared with the broader industry.
“These are not cybersecurity risks,” Berger said. “You need expertise similar to what tech companies use to build AI models – people who understand how to measure the probability of hallucination or output error.”
One of the biggest systemic concerns, Berger noted, is aggregation risk stemming from the industry’s dependence on foundation models – large, pre-trained systems like GPT and Claude that serve as the base for countless downstream applications.
“Foundation models introduce a common element between different AI models and different companies,” he said. If one model misses an event or makes a certain error, others built on the same foundation model could make the same mistake – creating correlated losses, he added.
That, he said, could pose the same kind of aggregation challenge insurers already face in natural catastrophes or cyber events.
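As a rough sketch of that aggregation dynamic – again my illustration, with hypothetical frequencies and severities, not a Munich Re model – consider a portfolio in which every insured’s AI application depends on one of a handful of shared foundation models:

```python
# Hypothetical portfolio: 1,000 insureds whose AI apps each sit on one of
# three foundation models. A flaw in a base model triggers claims from
# every dependent app at once, fattening the tail of portfolio losses.
import numpy as np

rng = np.random.default_rng(0)
n_insureds, n_sims = 1_000, 10_000
base_model = rng.integers(0, 3, size=n_insureds)   # which foundation model each app uses
p_flaw, p_idio, severity = 0.01, 0.01, 100_000     # assumed frequencies and claim severity

# Correlated scenario: a claim arises from a shared base-model flaw OR an app-specific error.
flaw = rng.random((n_sims, 3)) < p_flaw            # per-simulation base-model failures
idio = rng.random((n_sims, n_insureds)) < p_idio   # app-specific errors
loss_corr = (flaw[:, base_model] | idio).sum(axis=1) * severity

# Independent baseline with the same per-insured claim probability.
p_claim = 1 - (1 - p_flaw) * (1 - p_idio)
loss_ind = (rng.random((n_sims, n_insureds)) < p_claim).sum(axis=1) * severity

for name, loss in (("correlated", loss_corr), ("independent", loss_ind)):
    print(f"{name:>11}: mean ${loss.mean():,.0f}   "
          f"99th percentile ${np.percentile(loss, 99):,.0f}")
```

Both portfolios carry the same expected loss, but the correlated tail is an order of magnitude heavier – the same shape of problem cat and cyber modellers already manage.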
An audience member asked Berger how Munich Re approaches the “agentic AI” question – when systems begin orchestrating and executing tasks autonomously.
“When AI has a real-world impact, it comes with real-world risk,” Berger replied. If a company truly wants to reap the benefits of automation and smarter decision-making, it has to rely on AI models to make decisions on its behalf – and those decisions, he warned, can create consequences for the insureds.
He rejected the notion that insurers must wait for claims data to begin underwriting AI exposures. Instead, he pointed to the data already embedded in AI development itself.
“When we look at AI and generative AI, there’s the train-test paradigm,” he said. “If testing is done in a statistically robust way, we can use that data to infer error distributions for specific models. That allows us to quantify and price risk without waiting for claims to materialize.”
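As a simple illustration of that idea – a sketch under assumed numbers, not Munich Re’s actual pricing machinery – a statistically representative test set supports a confidence bound on a model’s true error rate, which can then feed an expected-loss estimate:

```python
# Infer an error-rate bound from held-out test results, then turn it into a
# rough expected-loss figure. All volumes and severities below are assumptions.
from scipy.stats import beta

def error_rate_upper_bound(errors: int, trials: int, confidence: float = 0.95) -> float:
    """One-sided Clopper-Pearson upper bound on the model's true error probability."""
    if errors >= trials:
        return 1.0
    return float(beta.ppf(confidence, errors + 1, trials - errors))

# Hypothetical test outcome: 14 erroneous outputs in 2,000 representative test cases.
p_upper = error_rate_upper_bound(errors=14, trials=2_000)

annual_calls = 50_000   # assumed insured usage volume
severity = 1_200.0      # assumed average loss per erroneous output

expected_loss = p_upper * annual_calls * severity
print(f"error rate <= {p_upper:.3%} at 95% confidence")
print(f"indicative annual expected loss: ${expected_loss:,.0f}")
```

The bound is only as good as the test set’s representativeness – the “statistically robust” caveat Berger attaches to the approach.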