AI is the next cyber: Why insurers may carve out standalone cover

“Silent AI” is quickly becoming the next portfolio blind spot—prompting exclusions, tougher underwriting questions and the early shape of a standalone market


Transformation

By Daniel Wood

Commercial insurance is running into a familiar problem: a fast-adopted technology is spreading across organisations faster than policy language can keep up. In practice, that creates “silent” exposure - risk that is arguably sitting inside multiple lines but was never explicitly priced, modelled or even asked about at placement. Cyber went through that cycle; AI is now well into the same one.

For brokers, the immediate tension is client expectation versus contractual reality. Many insureds still assume AI-related incidents will be treated like any other professional error, data event, or operational mishap - until a claim tests definitions, causation and exclusions. For insurers, the issue is accumulation: if many insureds are deploying similar tools in similar ways, a single defect, dependency or legal theory can scale into correlated loss.

That is why conversations with brokers and underwriters - along with industry reports - suggest the market’s current direction is to limit the ambiguity, tighten the underwriting and, where necessary, move the exposure into a product designed to address it. Some reports suggest major insurers are already moving to exclude AI-related risks from corporate insurance policies amid concern about unpredictable, potentially very large claims. At the same time, Lloyd’s players have begun backing targeted cover intended to respond to AI tool failures - an early marker of how standalone “affirmative AI” products may develop.

Eric Lowenstein, CEO of Tego, a Sydney-based specialist healthcare and medical indemnity underwriting agency, has watched this AI cover issue play out.

Speaking to IB at the recent UAC Market Exchange in Sydney, Lowenstein argued not that AI equals cyber in peril mechanics but that the market behaviour has similarities: new exposures tend to be covered broadly by default, carved out as the loss picture and legal theories mature, and finally rebuilt as dedicated cover once insurers decide the risk needs its own pricing, controls and limits.

From broad ambiguity to explicit exclusions

Lowenstein said that while he is not yet seeing a widespread wave of AI exclusions everywhere, some insurers overseas are beginning to add them. That could matter because exclusions rarely stay niche. Once an exclusion becomes standardised, it can spread quickly through underwriting guidelines, broker negotiations and renewal templates - often faster than insureds realise.

There are now concrete signs of that standardisation effort. Industry reporting indicates Verisk has developed general liability exclusionary forms for generative AI exposures with a January 2026 edition date, explicitly positioned as giving insurers the ability to “generally exclude” this emerging exposure.

For brokers, the practical implication is a new kind of coverage gap - one that doesn’t sit neatly in a single line. An AI-enabled incident can involve alleged misrepresentation (advertising injury), professional services failure (PI/E&O), privacy issues (cyber), or management oversight (D&O). If exclusions arrive in more than one of those towers at the same time, insureds can find themselves with a multi-policy “no man’s land” precisely when an AI-related dispute escalates.

Lowenstein’s view is that this is where the cyber analogy becomes instructive. Cyber started as something many policies inadvertently picked up; once claims and aggregation fears grew, cyber was pushed out of traditional wordings and consolidated into standalone cyber policies. In his telling, AI is heading toward the same end state.

“We’re already looking at a standalone AI insurance product,” he said. Lowenstein argued that as markets get uncomfortable, standalone products — or difference-in-conditions/difference-in-limits (DIC/DIL) solutions designed to drop down where underlying policies retreat — will start to emerge.

The first standalone products are already taking shape

Even as some markets pull back, others are building “affirmative” cover intended to respond to AI-specific failure modes. The Financial Times has reported on Lloyd’s-backed coverage aimed at losses caused by errors or malfunctions in AI tools, including chatbots, with triggers tied to performance degradation rather than one-off mistakes.

More specifically, Armilla has described a partnership-led, standalone third-party liability product with Chaucer addressing “mechanical underperformance” in AI systems and models, with coverage examples including hallucinations and model drift. Lloyd’s has also profiled Armilla AI as a Lloyd’s-aligned MGA/coverholder focused on AI liability insurance, explicitly naming risks such as algorithmic errors, drift and generative AI hallucinations.

This is the market beginning to do what it eventually did for cyber: define the peril, specify the triggers and create underwriting discipline around controls and governance. That discipline is the bridge between “we don’t want to cover this” and “we will cover this but under conditions we understand.”
