When AI gets it wrong: insurers examine professional liability risk

Government launches £40 million research lab as insurers examine emerging professional and cyber risk exposures

By Bryony Garlick

The UK government’s launch of a £40 million artificial intelligence research lab aimed at tackling unreliable AI outputs is highlighting a growing concern for insurers: businesses may already be exposing themselves to liability by relying on AI in everyday decision-making.

The new initiative aims to improve the reliability and safety of advanced AI systems, including tackling so-called “hallucinations”, instances where AI models generate inaccurate or fabricated responses.

But as organisations increasingly embed AI tools into routine business activity, insurance professionals say the technology is already creating new exposures across professional indemnity and cyber lines.

“Despite the insurance market being somewhat more cautious when it comes to emerging risk, AI seems to be an exception to the rule with many in the market being quick to welcome AI risk as part and parcel of underwriting UK business risk,” said George Grimshaw, divisional head of cyber and technology at The Clear Group.

“However, with the welcoming of AI risk comes the inevitable and hidden exposures that arise across professional indemnity/E&O and cyber product lines.”

AI adoption creating new professional liability exposures

As AI becomes more widely used across UK businesses, Grimshaw said underwriters are increasingly considering how the technology may affect professional liability risk.

“With the rise of UK businesses using AI in their day-to-day operations, such as developing research, writing reports and even assisting with high-level decision making, it has now become an additional exposure for PI underwriters to consider when reviewing a particular risk,” he said.

“With AI being prone to providing unreliable results or indeed ‘hallucinating’ outputs, businesses using the information generated to inform decision making could be exposing themselves to professional liability claims,” Grimshaw said.

He said the issue becomes particularly significant where AI-generated information is relied upon in professional advice.

“Even if the information is generated incorrectly by AI, the lack of human oversight by the business would be seen as negligent when providing advice or services to clients and leave the business on the hook when it comes to any claims for financial loss.”

Data governance questions emerge for cyber insurers

Beyond professional liability, the growing use of AI also raises questions around data governance and privacy exposures, particularly as organisations deploy AI tools that process large volumes of client data.

“With respect to cyber, AI use raises major questions around data governance and privacy breaches which may trigger privacy or regulatory liability rather than classic PI/E&O claims,” Grimshaw said.

“Poor AI governance and internal controls could lead to compliance-driven claims or regulatory fines with respect to the handling or indeed mishandling of client data.”

As businesses accelerate their adoption of AI tools, Grimshaw said organisations will need to strengthen oversight around how the technology is deployed.

“Businesses would need to look at mitigating this risk by developing AI governance frameworks within the organisation and creating privacy policies that include wording around the use of AI within their operations,” he said.

The government’s investment in improving AI reliability may help address some of the technology’s limitations, but insurers say governance and human oversight will remain critical as underwriters increasingly examine how businesses deploy AI in client-facing work.
