AI coverage gaps widen as reinsurers play catch-up, Lockton Re warns

Policies weren't built for probabilistic systems – and the mismatch is creating real exposure

Reinsurance News

By Kenneth Araullo

Re/insurers face a widening gap between what their policies intend to cover and what they actually cover as artificial intelligence systems move into core business operations, according to a new report from Lockton Re.

The report was produced in collaboration with Lockton International and Armilla AI, the world's only managing general agent exclusively focused on AI insurance. Armilla operates as a Lloyd's of London Coverholder with products backed by Chaucer Group, Axis Capital, Swiss Re, and Greenlight Re.

The report maps AI-related exposures across key commercial classes and identifies areas where coverage may be silent, fragmented, or misaligned with how AI failures occur in practice.

Oliver Brew, co-author and head of Lockton Re's Cyber Centre of Excellence, said AI differs fundamentally from traditional software in its probabilistic nature and its capacity to synthesize large volumes of data. These characteristics make AI systems prone to errors, with implications across all sectors.

"The current lexicon and frameworks for insurance products and risk categories were not designed with these systems in mind and are increasingly misaligned with how AI-related losses occur," Brew said.

Some insurers have begun responding. Munich Re and Greenlight Re are now underwriting losses tied specifically to errors made by AI screening tools. Munich Re's Michael von Gablenz noted that "the best AI model will always have a probability of making mistakes or hallucinating – it cannot be technically avoided."

According to researchers at George Washington University, AI incidents have spurred over 150 lawsuits in the US in the last five years.

Systemic risk looms

The report also addresses systemic risk from shared AI infrastructure, common foundation models, and correlated model behavior. A European Systemic Risk Board study warns that concentration among a small number of AI providers creates single points of failure, while widespread use of similar models can lead to correlated exposures.

Pete Nicoletti, global CISO at Check Point, recently warned that a single vulnerability in a widely used foundation model could cascade into simultaneous failures affecting thousands of organizations – blurring the line between a cyber event and an uninsurable systemic catastrophe.

Baiju Devani, CTO and co-founder of Armilla AI, said the challenge is not whether AI will create systemic risk events, but when, and whether underwriting practices can keep pace.

The report illustrates this with a scenario involving an AI chatbot that generates incorrect warranty commitments without any system breach. Such losses fall outside traditional cyber and liability triggers while still creating material financial exposure.

Karthik Ramakrishnan, CEO of Armilla AI, said silence in policy language creates uncertainty when claims arise. Both insurers and policyholders would benefit from greater clarity on how AI-related risks are addressed.
