Insurers and reinsurers are beginning to underwrite losses tied specifically to errors made by artificial‑intelligence screening tools, in a development that reflects growing demand for cover focused on model risk and the operational exposures of AI.
The product on offer covers so‑called “excess errors” attributable to AI models used by US mortgage lenders - losses that occur when borrowers default more often than the model predicts because the tool makes mistakes, rather than because of broader drivers such as interest‑rate moves, macroeconomic shocks or regulatory change. That narrow framing reflects insurers’ preference to isolate technology‑related model failure from conventional underwriting and market risk.
Start‑up MKIII, which provides AI screening for credit unions and community banks, has bundled insurance against model errors into its service. MKIII, which has 11 employees, said it has referred about 5,000 new US customers this year. Co‑founder Bryan Adler told the Financial Times: “It’s all done by the machine,” adding that MKIII retains a single person who “spends three hours a day manually reviewing some [borderline] cases” for creditworthiness. Adler said the insurance offering has helped lenders reduce the capital they must hold against the loans: “The main value is the capital relief,” he said.
Global reinsurance groups are participating in the market. Munich Re has signed up to directly cover the risks of an AI model misfiring, and Greenlight Re - an alternative reinsurer - has provided capacity alongside other backers. Armilla, an AI insurance start‑up, has evaluated the performance of MKIII’s software and helped obtain cover from reinsurers including Greenlight.
The trigger is deliberately specific, Armilla’s chief executive Karthik Ramakrishnan told the FT: “If borrowers defaulted more frequently compared with the model’s predictions, specifically because of ‘excess errors’ with the AI tool… insurance would pay out.”
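The mechanics of such a trigger can be sketched in code. The following is an illustrative, simplified model only - the actual contract wording, tolerance band, policy limit and the method of attributing defaults to model error are not public, and every parameter name and value here is hypothetical:

```python
def excess_error_payout(predicted_default_rate: float,
                        observed_default_rate: float,
                        portfolio_value: float,
                        tolerance: float = 0.005,
                        limit: float = 1_000_000.0) -> float:
    """Hypothetical parametric-style trigger: pay the loss attributable to
    defaults beyond the model's predicted rate plus a tolerance band,
    capped at the policy limit. Attribution of the excess to model error
    (rather than macro drivers) is assumed to have been established."""
    excess = observed_default_rate - (predicted_default_rate + tolerance)
    if excess <= 0:
        return 0.0  # defaults within the model's predicted range: no claim
    return min(excess * portfolio_value, limit)
```

On this toy design, a portfolio of $10mn with a predicted 2 per cent default rate and an observed 4 per cent rate would generate a payout on the 1.5 percentage points above the tolerance band, subject to the cap - illustrating why, as Munich Re notes below, the expected error rate feeds directly into the premium.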
Reinsurers say they can underwrite probabilistic model failure but that pricing must reflect the expected error rate. Michael von Gablenz, an AI specialist at Munich Re, told the FT that AI models are inherently probabilistic and will make mistakes: “The best AI model will always have a probability of making mistakes or hallucinating - it cannot be technically avoided, it’s in the nature of those models, because they are probabilistic,” he said. He added, “we’re comfortable covering a broad range of error rates, from very low to high — they will be reflected in the premium.”
MKIII said lenders using its platform collectively paid millions of dollars in insurance premiums to obtain tens of millions of dollars of insurance cover - protection that, the start‑up said, would allow the lenders to write hundreds of millions of dollars of home loans. The cover is deliberately narrow: it focuses on model error risk rather than broader economic or regulatory exposures.
Some insurers remain cautious about extending AI cover: several have sought permission from US regulators to exclude AI‑related losses from existing policies, reflecting concern that technology‑related claims could create complex, correlated exposures and a wave of liabilities that are difficult to quantify under traditional wordings.
For insurers and brokers, the products illustrate a move toward more granular, parametric‑style cover tied to measurable model performance rather than traditional indemnity triggers.
The developments also raise broader questions for product design, aggregation risk and regulatory engagement - areas insurers will need to manage as machine‑led underwriting and automated decision tools become more widely used across financial services and other sectors.