Artificial intelligence failures tied to widely used models could trigger simultaneous losses across industries and jurisdictions, raising aggregation concerns for reinsurers, according to a new Gallagher Re report.
The white paper, “Smart Systems, Blind Spots: Rethinking Insurance for the AI Era,” states that flaws in widely adopted AI systems, including foundation models, may generate correlated losses across sectors and geographies. Unlike traditional catastrophe events, these failures can spread rapidly, creating accumulation risk that is difficult to model using existing approaches.
This dynamic introduces a form of systemic exposure unlike natural catastrophe risk, where losses are bounded geographically and temporally. AI-driven failures, by contrast, can propagate wherever the same underlying models are deployed across industries.
The report identifies gaps in how current insurance products respond to AI-related risks. It states that liabilities linked to hallucinated outputs, discriminatory model behavior, model drift, and contaminated training data are often not addressed under cyber, technology errors and omissions (E&O), product liability, or commercial general liability policies.
Legal and regulatory developments are also shifting accountability toward AI deployers. Because contractual arrangements frequently limit vendor liability, organizations that use AI systems are often left bearing the financial losses when failures occur.
“AI is transforming the way businesses operate, but it also introduces a new class of risks that traditional insurance policies were never designed to address,” said Ed Pocock, global head of cyber security at Gallagher Re. “This paper provides a roadmap for insurers, brokers, and enterprises to navigate these challenges and develop solutions that reflect the realities of AI-driven liabilities.”
The findings come at a time when insurers are integrating AI across underwriting, claims, and customer operations. Research published by McKinsey & Company notes that AI is being used to improve underwriting accuracy, automate claims handling, and enhance customer interactions, while also introducing new operational and governance challenges.
The same research indicates that AI adoption is becoming central to how insurers operate, with applications ranging from risk assessment to pricing and policy servicing. This breadth of adoption concentrates dependence on a small number of AI systems, which in turn raises the potential for correlated failures across portfolios.
The Association of British Insurers has also pointed to AI-enabled risks, including fraud and operational disruption, alongside other pressures such as climate-related losses and regulatory complexity. These developments indicate that insurers are managing risks that existing models and systems were not designed to address.
Gallagher Re states that insurers have started introducing standalone AI insurance products and endorsements to address these exposures. These offerings aim to define coverage boundaries and respond to risks arising from both generative and non-generative AI systems.
The report also outlines considerations for structuring insurance products that align with AI failure modes, while working alongside existing cyber, casualty, and E&O coverage. It refers to governance measures and contractual approaches that can support clearer allocation of risk.
The report’s focus on aggregation risk aligns with concerns about how AI exposures may accumulate across ceded portfolios. Freddie Scarratt, deputy global head of InsurTech at Gallagher Re, said: “The rapid adoption of AI has outpaced the insurance market's ability to respond to the risks it creates. By working together, insurers, reinsurers, and enterprises can close the protection gap and ensure that AI adoption is underpinned by robust risk management and insurance solutions.”
Gallagher Re said the framework is intended to support insurers, brokers, reinsurers, and risk managers in assessing how AI-related risks differ from traditional exposures and how these risks may develop across portfolios.