Online Safety tightening forces clearer AI liability boundaries

Proposed changes to the Online Safety Act could redraw AI liability lines for insurers

By Bryony Garlick

Proposed amendments to the UK’s online safety regime could hasten the arrival of generative AI’s liability phase, raising sharper questions for insurers around regulatory exposure and coverage design.

The Online Safety Act was first proposed in 2019, before mainstream generative AI emerged, and passed in 2023, with enforcement only beginning last year. Since then, AI chatbots have become rapidly embedded across platforms, exposing the gap between legislative timelines and technological acceleration.

The Government has signalled its intention to close perceived loopholes, including clarifying how one-to-one chatbot interactions are regulated and strengthening data retention obligations in cases involving the death of a child.

While political debate has focused on safety and moderation, the insurance implications are more structural. Neil Beresford, partner at Clyde & Co, said the proposed tightening reflects a “fast-evolving risk landscape”.

“The rise of AI-driven harms, some involving children, creates questions around the adequacy of existing safeguards,” he said. “There is still much to be done to establish the boundaries of the duties of care owed by tech companies in situations where their technologies have been used to cause real-world harm”.

For insurers, those boundaries matter. As AI systems become more autonomous, exposure may be framed as negligence, product liability, regulatory breach or failure of service. If enforcement leads to investigations or financial penalties, the risk may not sit neatly within existing D&O or E&O categories.

Beresford said insurers “will also wish to clarify the legal obligations and types of harm that they wish to cover, ensuring that coverage keeps pace with technological development”.

Cross-border enforcement pressure

Regulatory layering may add further complexity. Jimmy Heaton, head of international D&O and FI at Rokstone, said globally deployed AI products increase the likelihood of overlapping investigations.

“AI products are commonly offered/used on a ‘global’ basis, therefore we can expect the list of investigating authorities to increase further, compounded by multiple agencies having input within the same country, e.g. Ofcom and also the ICO within the UK – the list of investigative agencies and departments could well increase exponentially,” he said.

He said the existence of loopholes illustrates the lag between innovation and legal adaptation.

“The point that such a loophole as this exists within current UK law shows the divide between the pace of technological development and adaptations of such technology, versus our understanding of its applications within the current legal framework,” he said.

For insurers, that divergence may translate into uncertainty around territorial triggers, regulatory reporting obligations and defence cost exposure.

Underwriting and wording resilience

Whether legislative tightening materially changes underwriting is a more nuanced question. Heaton said the answer may hinge on how underwriters themselves deploy technology in their assessment models.

“Ironically, I think it depends on the underwriter’s own reliance on technology,” he said, noting that many AI firms may currently fall into broad e-traded classifications such as ‘software developer’ or ‘technology company’, supported by similarly generic SIC codes. He questioned whether existing e-trading platforms allow “the level of detail required to sustainably underwrite AI-related risks”.

On policy wordings, his view was similarly measured. “Generally, to an extent I would say E&O and D&O is prepared for autonomous AI output risk but this opinion comes with some caveats,” he said, warning that long-tail classes can take years to reveal their true exposure. “As always with liability insurance and long tail classes - the proof will be in the claims outcomes.”

From a coverage standpoint, he expects divergence rather than a uniform reaction, with the market likely to split between insurers that exclude AI exposures and those that choose to innovate and “learn with the curve”. Pricing, however, may remain stable. In a competitive market, he argued, the focus should remain on disciplined risk selection rather than “adding an extra 20/30% to the premium”.

The tightening of the online safety regime may not trigger sudden pricing movement. But as statutory duties around AI systems become clearer, insurers face a more fundamental question: whether existing classifications, wording architecture and territorial assumptions are aligned with how AI liability is now being defined.
