How deepfakes are reshaping cyber insurance, and exposing policy blind spots

AI-powered impersonation is testing the limits of social engineering cover, crime exclusions, and broker readiness

By Bryony Garlick

The line between belief and deception is being redrawn in real time. For cyber insurance experts like Ethan Godlieb, associate partner – head of cyber, tech & fintech at Consilium Insurance Brokers, deepfakes aren’t just the next digital risk. They’re a full-spectrum stress test for insurance controls, coverage structures, and client readiness.

“Seeing is no longer believing,” Godlieb said. “It’s becoming harder and harder to be confident that the person you’re talking to is who they say they are.” 

That uncertainty, heightened by AI-powered voice and video cloning, has direct implications for cyber risk coverage, crime claims, and how brokers advise clients. Especially when millions of pounds can vanish in minutes. 

A new face on social engineering 

While deepfakes are often viewed as cutting-edge threats, Godlieb places them squarely within a broader continuum of social engineering. 

Social engineering has long relied on deception via email - whether broad phishing campaigns, targeted spear-phishing, or so-called “whaling,” which impersonates senior executives to authorise payments. But as Godlieb said, “Now we’re entering an era where attackers aren’t just spoofing what you read but what you hear and see.” 

The result is impersonation on a scale and level of realism that older controls weren’t built to catch. High-value funds transfer fraud, particularly via video and voice impersonation, is emerging as a key concern, alongside increasingly convincing business email compromises. 

A widely cited case in Hong Kong saw a company executive join a video call that appeared to include several board members, all confirming a request to transfer millions of dollars. The call lasted around 20 minutes, and everything seemed legitimate, until it was revealed that every participant on the call had been fabricated using deepfake technology. Godlieb pointed to the incident as a vivid example of how impersonation tactics have evolved, and how convincingly they can unfold in real time. 

While cases like this are still rare in the UK mid-market, Godlieb believes that will change. “The tech is improving quickly. A few years ago, deepfakes were novelty apps. Now you’re watching videos on social media and wondering, ‘Did that actually happen?’ That wouldn’t have been the case two or three years ago.” 

The pass-the-parcel problem: crime loss in cyber wrapping 

Deepfakes sit awkwardly at the intersection of cyber and crime insurance, creating both coverage ambiguity and claims friction. 

“Cyber might include social engineering cover, but it’s usually sub-limited, capped at, say, £250,000, even if the policy limit is £5 million,” Godlieb said. “It’s really seen as a crime loss.”

That duality has led to what he calls the “pass-the-parcel” problem. “The wrapping is cyber, but the present inside is crime,” he said. “Yes, it looks like a cyberattack - AI, deepfake, impersonation - but the financial loss is a crime event.”

Rather than call for hybrid products, Godlieb argues for clear delineation and collaboration between policies. “The crime market is set up to handle theft-of-funds losses, as long as there’s a financial loss. Cyber complements that. What matters is having a dovetailed programme where it’s clear which policy responds when.” 

That clarity is particularly vital for financial institutions, which often don’t get any crime cover under cyber at all. “For everyone else, there may be a sublimit under cyber, but if the exposure is significant, it’s worth having a full crime policy,” he said. 

Policy blind spots and insurer expectations 

Where policies fall short, according to Godlieb, is typically in two places: missing cover altogether or relying on sublimits that don’t match the risk. 

“Cyber policies are great at responding to data breaches, network interruption and classic cyber events,” he said. “But when it’s a fraud loss triggered by human error - someone voluntarily transferring money after a fake video call - it’s fundamentally a crime loss.”

As for controls, insurers are increasingly specific about what they expect. “Dual approval for changes in payment details and significant transfers is a must. Segregation of duties. Voice and video should be treated as untrusted - just seeing someone on Teams isn’t good enough anymore,” he said. 

Proposal forms now often ask for proof of training in deepfakes, voice phishing (“vishing”), and layered access controls for finance. But training, Godlieb noted, is playing catch-up.

While phishing awareness is now common across most organisations, training rarely extends to video-based deception. As Godlieb put it: “I haven’t yet seen training aimed at ‘How do you spot a fake video?’” He added that many people remain unaware that deepfakes even exist, let alone how convincing they can be. 

From isolated incidents to integrated programmes 

For brokers, deepfake risk is more than a new talking point: it’s a reason to rethink how entire insurance programmes are built.

“This isn’t a standalone product conversation,” Godlieb said. “It’s a portfolio one. It’s about how policies interact - cyber, crime, D&O, tech E&O, PI. If you’re not mapping loss scenarios across the full programme, you’re not giving the best advice.” 

That includes thinking about how claims play out in practice. “We often want cyber and PI with the same insurer to avoid finger-pointing in a claim,” he said. The same logic applies to cyber and crime, especially in fast-moving deepfake losses. 

Ultimately, brokers must help clients prepare not just for known threats, but for how those threats evolve. “Most cyberattacks happen late on a Friday, when people are tired,” Godlieb said. “If an attacker sends out thousands of deepfakes, they only need one person to believe it. That’s the reality we’re dealing with now.” 

For insurance to keep pace, brokers will need to treat deepfakes not as a product feature but as a programme-wide risk - one that demands tighter controls, clearer policy boundaries, and a more joined-up conversation across cyber, crime, and beyond.
