With Nano Banana Pro, deepfakes have hit a new milestone – what does it mean for corporate risk?

Next-gen deepfakes challenge corporate clients and insurers as synthetic media becomes indistinguishable from the real thing


Google’s launch of its latest AI image generator, capable of photorealistic people, flawless typography, and richly detailed scenes, has intensified fears that humanity is on the cusp of being unable to distinguish real from synthetic media.

Early testers report that Nano Banana Pro can render complex infographics, recreate celebrity likenesses with minimal prompting, and overcome longstanding weaknesses in text generation. For many experts, the leap in quality marks a new inflection point in the deepfake arms race.

That shift, said Daniel Woods, head of cyber underwriting research at Coalition, is reshaping how organizations must think about authenticity, reputation, and insurance risk.

“Deepfakes will continue to improve in quality almost indefinitely,” Woods told Insurance Business in an interview.

“Humans will always struggle because we can only really rely on heuristics. Two or three years ago, you could look at fingers. Now they can do fingers. So, you start looking at text in the background. Over time, those heuristics are getting harder and harder to spot.”

Deepfakes poised to change the risk landscape

For businesses, the risk from deepfakes can be significant. Historically, reputational crises stemmed from data breaches; now, any public audio or video of a company’s CEO, its employees, or even its manufacturing processes can be weaponized.

“If your CEO has any kind of video or audio content in public, these models need something like 10 seconds of their voice, and they can produce deepfakes,” Woods said. “We do not think it is possible to prevent these attacks in the short term.”

Coalition recently launched a Deepfake Response Endorsement, focused on helping companies react when synthetic media spreads. The coverage does not insure reputational damage itself but funds crisis response: forensic analysis to prove falsity, legal support for takedown requests, and PR guidance for public statements or partner communications.

Ultimately, Woods believes public deepfakes may become manageable through institutional adoption of detection tools. However, he is more concerned about private-channel deepfakes, including impersonation attempts in payment fraud, phishing, and business email compromise, which are already among the most common cyber-insurance claims.

“In the future, the same scam could be executed over Microsoft Teams, where you impersonate a vendor on a video or audio call, or by phone, where you impersonate their voice,” Woods said.

“With public deepfakes, you need a relatively small number of institutions to adopt detection. For private deepfakes, you need everyone who processes transactions to adopt these tools.”

Reliance shifts from human judgment to technical forensics

As visual tells disappear from AI-generated media, companies, journalists, and investigators are turning toward technical signals embedded in images and video. Woods sees three buckets of detection emerging.

First are model fingerprints: subtle statistical patterns that generative models leave in their output. Each model encodes images and video in a particular, detectable way, so training a detector on a batch of outputs from a given model lets forensic systems flag future content that matches the signature.
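For illustration, here is a minimal sketch in Python of how such a fingerprint detector might be trained: it reduces each image to a radially averaged power spectrum, one place where generative artifacts tend to surface, and fits a simple classifier on one model’s outputs versus real photos. The folder names and the feature choice are illustrative assumptions, not any vendor’s actual pipeline.

```python
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def spectral_features(path, size=128):
    # Radially averaged log power spectrum: upsampling artifacts in
    # generated images often show up as peaks in this profile.
    img = Image.open(path).convert("L").resize((size, size))
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(np.asarray(img, dtype=float))))
    y, x = np.indices(spectrum.shape)
    radii = np.hypot(y - size // 2, x - size // 2).astype(int)
    sums = np.bincount(radii.ravel(), weights=spectrum.ravel())
    counts = np.bincount(radii.ravel())
    return np.log1p(sums / np.maximum(counts, 1))[: size // 2]


def load_dataset(real_dir, fake_dir):
    # Label 0 = real photograph, label 1 = output of the model under study.
    X, y = [], []
    for label, folder in ((0, real_dir), (1, fake_dir)):
        for p in Path(folder).glob("*.png"):
            X.append(spectral_features(p))
            y.append(label)
    return np.array(X), np.array(y)


if __name__ == "__main__":
    # Hypothetical folders of real photos and one model's outputs.
    X, y = load_dataset("real_photos", "model_outputs")
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2%}")
```

A detector trained this way is tied to the signature of one model, which is why forensic systems must keep retraining as new generators appear.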

Second is metadata analysis, a layer deepfake creators often forget to scrub. Forensic firms still uncover mismatches between timestamps, location tags, descriptions, and what appears in the visual content.

“Those kinds of manual investigations are still very useful,” Woods said.
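As a rough illustration of that kind of manual check, the Python sketch below pulls EXIF tags with Pillow and flags the sorts of inconsistencies an investigator might pursue. The specific heuristics and the file name are assumptions for demonstration, not a forensic firm’s actual checklist.

```python
from PIL import Image
from PIL.ExifTags import TAGS


def metadata_flags(path):
    exif = Image.open(path).getexif()
    tags = {TAGS.get(k, k): v for k, v in exif.items()}
    # Capture details such as DateTimeOriginal live in the Exif sub-IFD (0x8769).
    tags.update({TAGS.get(k, k): v for k, v in exif.get_ifd(0x8769).items()})

    flags = []
    if not tags:
        flags.append("no EXIF metadata at all (common in generated or scrubbed files)")
    if "Make" not in tags:
        flags.append("no camera manufacturer recorded")
    if "Software" in tags:
        flags.append(f"touched by editing software: {tags['Software']}")
    if {"DateTime", "DateTimeOriginal"} <= tags.keys() and tags["DateTime"] != tags["DateTimeOriginal"]:
        flags.append("file modified after capture (DateTime differs from DateTimeOriginal)")
    return flags


for flag in metadata_flags("suspect.jpg"):  # hypothetical file under review
    print("FLAG:", flag)
```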

The third, and most promising in Woods’ view, is cryptographic watermarking. Devices could sign an image or video at the moment of capture, creating a provenance trail that proves authenticity. He predicted that corporate leaders would soon adopt the practice for their public media.

“This flips the problem. Instead of trying to prove something is fake, the burden shifts to proving something is real,” he said.
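A minimal sketch of the idea, assuming a device-held Ed25519 key and Python’s `cryptography` package, would look like the following. Real provenance standards such as C2PA embed signed manifests inside the file itself; here we simply sign the raw bytes to show the core primitive.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Device key pair: in practice the private key would live in the camera's
# secure hardware, and the public key would be published by the owner.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()


def sign_capture(image_bytes):
    # Sign the content at the moment of capture.
    return device_key.sign(image_bytes)


def verify_capture(image_bytes, signature):
    # Anyone holding the public key can check provenance later.
    try:
        public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False


original = b"...raw sensor data..."  # stand-in for a real captured frame
sig = sign_capture(original)
print(verify_capture(original, sig))              # True: untouched capture
print(verify_capture(original + b"edit", sig))    # False: bytes were altered
```

Any later edit to the bytes invalidates the signature, which is exactly the burden shift Woods describes: the consumer verifies that content is real rather than hunting for evidence that it is fake.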

With Nano Banana Pro’s ability to reproduce faces and text with near-perfect fidelity, institutional norms, especially in media, must evolve quickly.

“When a journalist reports on an image or video, what do they need to do to verify it?” Woods asked. “Should journalists run everything through deepfake detection tools? Should they only report on statements or videos that have been signed with a watermark? We’re in a strange transition period.”

Forensics are keeping pace… for now

Despite the alarming leap in model capability, Woods is cautiously optimistic about detection technologies. Vendors report detection accuracy rates above 95%, and he argues generative-AI companies could do more, such as embedding optional signatures or visible watermarks in their outputs.

But as models like Nano Banana Pro continue to advance, the gap between what humans can discern and what machines can fabricate will only widen. Corporate leaders will soon need formal verification procedures for any media they publish or rely upon.

And as deepfakes spread from public platforms into private communications, society must prepare for a world where the question is no longer whether an image looks real, but whether it can be verified at all.
