From social media to systemic risk? Addictive design lawsuits put insurers on watch

Claims outlook hinges on expansion beyond Big Tech and evolving legal definitions

Professional Risks

By Bryony Garlick

A wave of US lawsuits targeting social media platforms over “addictive design” is beginning to test how far liability for digital harm can stretch and whether insurers are facing the early stages of a new claims trend.

The question is not just whether these cases succeed, but whether they signal the emergence of a scalable source of liability.

Recent verdicts against Meta and YouTube centre on allegations that engagement-driven features, including those tied to appearance and image-based content, contribute to compulsive use and downstream psychological harm.

Some of the most contested claims focus on appearance-driven features such as filters and image-led content, where claimants argue platforms reinforce behaviours linked to body image and mental health.

While litigation remains at an early stage, the framing marks a shift away from content moderation toward product design as the source of harm, raising more fundamental questions for insurers.

Expanding risk

According to Adam Grossman, director and head of casualty emerging risk at Moody’s, the issue is not any single feature but the broader category of “addictive software design” and how it evolves.

That definition matters because it determines how far claims can spread. Grossman said: “If the scope of what is labelled ‘addictive software design’ expands to include products beyond those currently in litigation, that could increase the number of plaintiffs and therefore overall claims activity.”

That expansion is already underway. More than 4,000 cases have been filed across a widening range of sectors, suggesting the risk is no longer confined to social media alone.

While Moody’s analysis does not isolate specific features such as beauty filters, these fall within the same broader exposure, where claimants argue design choices reinforce harmful behaviours linked to mental health and self-perception.

Scalable but not systemic

The immediate question is whether this develops into a meaningful liability class or remains contained within a limited group of defendants. Early indications point to scalable but not yet systemic risk.

Jimmy Heaton, head of international D&O and financial institutions at Rokstone, said the current wave of litigation remains focused on corporate entities rather than individuals, noting that “the lawsuits, for now at least, are being brought against the entities by aggrieved parents and carers, meaning individual directors/officers of the corporations are not named and thus the current lawsuits are not likely to trigger a typical D&O (Side A&B policy).”

That position may not hold. It is “not unforeseeable that such litigation may eventually name individual directors/officers as the situation develops,” he said, pointing to potential exposure if claims begin to target board level decision making.

Even so, he does not yet view the risk as systemic. “US lawsuits, both in respect of defence costs and any eventual settlement award, are never small numbers and therefore the risk to D&O insurers is potentially scalable, yes. However, I do not believe this to be a systemic exposure.”

For now, the concentration of claims against a relatively small number of major technology platforms limits broader impact, though the potential for litigation to extend across the wider digital ecosystem remains a longer term consideration.

Coverage questions

While claims may take time to scale, the more immediate pressure may lie in how policies respond.

Rosehana Amin, partner at Clyde & Co, said the ruling adds to “a growing global focus on the potential harms linked to digital platform design, and its allegedly addictive qualities particularly for young people.”

For insurers, she said, it raises “emerging questions around whether alleged harms constitute or is the result of intentional conduct,” an issue likely to shape future coverage discussions.

That question of intent could prove decisive. As Heaton noted, if a claimant alleges addictive design was “intentionally formulated to cause harm, perhaps this may trigger an ‘Illegal/Deliberate Acts’ type exclusion.”

Lawrence Fine, management liability coverage leader at WTW US, said the verdicts could also encourage further litigation beyond the largest platforms, with “more lawyers” likely to pursue claims against companies using algorithms to drive user engagement.

He added that such claims would “likely be covered by E&O/PI policies,” while warning brokers to watch for emerging exclusions referencing “addiction” or “algorithm.”

As regulation tightens and standards evolve, Amin said insurers and policyholders will need to stay alert to how these developments interact with liability and policy wording, adding that it will also be important to see whether the ruling is upheld on appeal.

Despite growing litigation, insurers are not yet adjusting underwriting or pricing. Heaton said it is “too early to make any amendments to our approach” and that the market will continue to monitor claims trends and early case outcomes.

Grossman said the longer term concern lies in correlation rather than individual outcomes, noting that “severity and aggregation are always a concern when considering correlated risks.”

Whether addictive design becomes a meaningful driver of claims will depend on how far courts extend liability and whether the definition of harm broadens beyond a small group of high profile defendants.

For now, the risk remains emerging. But if courts continue to expand how harm is defined, insurers may find themselves exposed to a class of liability that is both harder to quantify and broader than initially anticipated.
