Insurers sound the alarm on AI voice cloning's growing liability risks

Insurers warn that insureds lack adequate controls amid regulatory uncertainty and emerging lawsuits

By Gia Snape

As artificial intelligence-powered voice cloning technology rapidly gains traction across marketing, customer service, and digital media, insurers and risk professionals warn that businesses remain underprepared for the legal and operational risks it entails.

Adoption is accelerating in the global AI voice synthesis market, estimated at $3.3 billion, yet legal frameworks are evolving unevenly, claims activity remains limited, and insurance policies are only beginning to adapt.

“The main concern is that this is a new, emerging risk,” said Erlisa King, director of product liability at Tokio Marine HCC. “What we’re finding is that our insureds aren’t necessarily prepared, so they may not have the quality controls in place to mitigate this exposure.”

AI voice cloning: Emerging regulatory and litigation exposures

AI voice cloning uses machine learning to create a digital replica of a person’s voice, mimicking their unique pitch, tone, and speech patterns from only a limited audio sample. The technology powers synthetic voices for content creation, customer service, and accessibility: commercial uses include voiceovers, localized content such as dubbing into other languages, personalized digital assistants, and restoring speech for people with impairments, transforming both media production and customer engagement.

Voice cloning poses distinct challenges compared with other forms of AI, particularly because it intersects with intellectual property, privacy, right of publicity, and fraud risks. That complexity is amplified by regulatory uncertainty in the US. California’s recently enacted Transparency in Frontier Artificial Intelligence Act (SB 53) addresses AI accountability more broadly, but leaves voice cloning and biometric voice replication in something of a gray area.

For insurers, that ambiguity raises the likelihood of future litigation.

“We’re seeing AI cloning used across websites, training modules, customer service bots, voicemail campaigns and digital avatars,” King said. “As that expands across platforms, we’re going to see more violations: contract violations, potential fraud and related issues.”

Several high-profile lawsuits already signal where plaintiffs’ attorneys may focus their next efforts. In one case, two voice actors sued Lovo, Inc., an AI voice generator, alleging unauthorized cloning of their voices. They claimed their voices were used beyond the scope of what they had consented to, for commercial voiceover work delivered via Lovo’s platform.

Meanwhile, ElevenLabs, Inc., a high-profile AI voice-cloning company, is facing lawsuits from voice actors alleging unauthorized use of their voices, including misappropriation of likeness and publicity rights.

King said insurers expect these cases to multiply as the technology becomes more mainstream and regulators begin to catch up.

How coverage could evolve to protect against AI voice cloning risks

So far, however, claims activity has been modest. According to King, most incidents seen to date have triggered coverage under advertising injury and personal injury provisions within standard liability policies.

Even so, she said brokers are increasingly advising clients to look beyond traditional coverage. Media liability, cyber liability, errors and omissions, privacy liability, and crisis management or reputational harm coverage are all being discussed as potential tools to address voice cloning exposures.

The reputational stakes, in particular, can be high. “Once these claims start to hit, having crisis management coverage becomes important,” King said.

Despite this expanding menu of coverage options, King acknowledged that gaps could emerge as claims volume increases and legal theories become more refined. With limited loss history to rely on, carriers are instead focusing on underwriting discipline and risk controls. That scrutiny may eventually translate into policy sublimits or exclusions specific to AI and biometric technologies.

“I do see [coverage] expanding in the future,” King said. “Where we currently have a cyber sublimit, we may eventually see specific wording addressing AI cloning, imaging and biometrics.”

Sublimits are particularly likely for companies that use voice cloning as a secondary function rather than a core operation, such as manufacturers deploying AI-generated voices for customer service or marketing support. Larger or more AI-centric businesses may be pushed toward standalone cyber or technology-specific policies.

From an underwriting perspective, insurers have started asking how companies intend to use voice cloning, whether they obtain written consent from individuals whose voices are replicated, how biometric data is stored and protected, and what safeguards exist to prevent misuse or unauthorized access, King said. 

Risk management for AI voice cloning

Risk transfer is another critical strategy. Insurers generally prefer that businesses rely on vetted third-party technology providers rather than developing or operating cloning tools internally. “The ideal scenario is that the insured uses a vetted provider with proper contracts, watermarking, tracking and controls,” King noted.

Beyond insurance placement, companies are being urged to strengthen internal governance. That includes formal policies governing AI use, takedown protocols for unauthorized content, clear contract management systems for voice actors, and access to legal and public relations support in case of an incident.

King also emphasized the need for heightened safeguards when minors are involved and for ongoing reviews of the platforms hosting AI-driven voices or avatars.

Ultimately, insurers stress that managing AI voice cloning risk is a shared responsibility. Brokers must educate clients on emerging exposures and coverage developments. Underwriters must refine questions, wording and limits. Insureds must implement robust controls and training.

“We all have a role to play,” King said. “The goal is to keep these exposures under control and, ideally, prevent incidents from becoming claims at all.”
