When AI goes quietly wrong: Why ‘silent AI’ is the next big insurance shock

As businesses rush to embed AI into everything from underwriting to client service, insurers and brokers could be blindsided by the first “silent AI” mega-claim – a loss arising from AI that nobody thought to price, underwrite or exclude

By Daniel Wood

In the United States, the Mata v Avianca case – where New York lawyers were sanctioned after filing a brief packed with AI‑generated, “fake” case citations – is already a cautionary tale of what happens when professionals lean on AI without understanding its limits. It was not an insurance dispute, but it exposed the kind of AI‑driven professional error that could migrate to Australia and into claims under professional indemnity (PI) and other liability policies.

For Nicholas Blackmore, partner at Kennedys in Melbourne and head of the firm’s APAC cyber risk group, that kind of scenario is the canary in the coal mine. He believes the market is still waiting for its first major “silent AI” test case – and when it comes, it could be brutal. For that reason, he suggested brokers in Australia start a thorough AI fact-finding process with clients now.

Waiting for the first big AI coverage fight

Blackmore said the litigation hasn’t landed yet, but the ingredients for a major dispute are clearly in place. His first concern is a coverage battle where AI is at the heart of the facts but nowhere in the wording.

“It may be that we get a large case, a big dispute about whether a particular scenario is covered by PI or product liability when it was a case of an AI tool going wrong,” he said.

That is classic “silent” exposure: policies never designed with AI in mind being asked to respond to losses directly caused or amplified by AI tools. In PI, for example, many wordings cover “any liability that arises as a result of the provision of services,” supported by a definition of “services” that predates the generative AI boom.

We are already seeing real‑world hints of the problem. Blackmore pointed to cases of law firms using AI research tools that fabricate citations and generate poor‑quality court documents. The client, understandably, is angry and may sue for negligence. From an insurance perspective, that looks like a straightforward professional services claim: the lawyer used an AI tool as part of providing legal services and the service was defective.

“That type of scenario may simply be covered under the policy, but the insurer will then have a problem if they did not price that risk in,” said Blackmore. “The response may be that insurers start pricing their policies differently, asking more detailed questions about AI, or seeking to exclude certain AI-related risks from some policies by adding exclusions.”

The danger for carriers is that the first big test case crystallises not just one claim, but an entire class of unpriced exposure – triggering rapid tightening of wordings and difficult conversations with brokers and clients.

AI governance and the broker’s new fact‑find

Brokers, said Blackmore, need to start with a simple but often unanswered question: how, exactly, is the client using AI?

“Brokers should be trying to get a clear picture of what their clients are doing in the AI space and what controls and governance they have around AI tools in their business,” he said.

That means going well beyond whether the company has a formal AI strategy. Staff may already be pasting customer data into public chatbots, using AI‑driven document tools or relying on generative models for advice and drafting.

“One related concept in the industry is ‘shadow AI’, which is where employees use AI tools without the organisation’s knowledge or supervision,” Blackmore said. “That is clearly a recipe for chaos.”

For brokers, this is an opportunity to differentiate. A practical AI fact‑find is starting to look essential:

  • Are you using AI at all? If not, that can be evidenced to insurers, potentially supporting better pricing.
  • If you are using AI, what tools are in play, and in which business processes?
  • What data is fed into those tools – especially personal, confidential or regulated data?
  • What governance exists: policies, approvals, human oversight and documentation?
  • How is vendor and model risk assessed when a new AI tool is rolled out?

Protecting clients from broad, blunt exclusions

Blackmore’s team is working with clients on these questions. If an insured can demonstrate this kind of discipline, he said, “from an insurer’s perspective that will hopefully improve the risk profile.”

In other words, AI governance is fast becoming part of insurability. Brokers who can surface and evidence that governance will be better placed to secure cover – and to argue against broad, blunt AI exclusions. The challenge is that AI adoption isn’t creeping into business – it is exploding. Boards, vendors and investors are all pushing in the same direction: more AI, faster. Risk awareness, by contrast, is playing catch‑up.

“The extent to which people understand the risks and limitations of AI is lagging a long way behind,” said Blackmore. While there is “a bit more caution emerging,” it is nowhere near matching the speed of deployment.

That asymmetry is exactly where silent AI losses are likely to arise: AI embedded everywhere, but risk thinking and policy design still rooted in a pre‑AI world. Blackmore put it starkly: “My biggest concern for the next year or two is that AI will be everywhere, and we will not be keeping up in terms of understanding the risks and limitations.”

For insurers and brokers in Australia and New Zealand, the task before the first big test case lands – whether in a courtroom in New York or the Federal Court in Sydney – is to make AI visible. That means interrogating how AI is used in insureds’ businesses, building and evidencing governance, revisiting PI and liability wordings for unintended AI exposure, and resisting the temptation to rely on blanket exclusions as the only line of defence. The first silent AI mega-claim may still be hypothetical, but as adoption surges and risk understanding lags, the window to prepare – and to help clients prepare – is closing fast.
