Most Australians believe they can identify an AI-driven scam, but new testing suggests their detection skills may not match that confidence, with implications for fraud prevention and insurance exposure. Research commissioned by Commonwealth Bank of Australia (CommBank) found that 89% of respondents felt at least somewhat confident in their ability to spot an AI-generated scam. When asked to distinguish between genuine and AI-generated images, however, participants were correct only 42% of the time, worse than they would have done by guessing at random.
The findings indicate that age offers little protection. Respondents aged over 65 were only 6 percentage points less accurate than younger respondents, suggesting that deepfake content can mislead all age groups. Despite the increasing use of AI tools in fraud, only 42% of Australians surveyed said they were familiar with AI-enhanced or deepfake scams. James Roberts, general manager of group fraud at CommBank, said: “The findings reveal a growing gap between confidence and reality – and that gap is exactly what scammers are looking to exploit as they increasingly turn to AI to target everyday Australians and small businesses.”
Roberts said people should not assume that technological change renders established scam-prevention measures ineffective. “The good news is that the steps that keep people safe don’t need to evolve at the same speed as the technology does. Deepfakes might be new, but the same tried-and-tested habits – slowing down, checking details, and speaking with someone you know and trust, such as a family member – remain your best defence, even against AI-powered scams,” he said.
CommBank has adopted the “Stop. Check. Reject.” framework for responding to suspected scams, including investment fraud, impersonation schemes, altered invoices, romance scams, and business email compromise, whether or not they involve AI. The bank also offers CallerCheck, which lets customers who receive a call from someone claiming to be from CommBank confirm the caller’s identity via a security message sent to the customer’s banking app.
According to Professor Monica Whitty, professor of human factors in cyber security at Monash University, deepfakes work in part because they align with how people typically respond to familiar voices and faces. “Humans tend to trust faces, voices, and familiar people. Deepfakes take advantage of that instinct,” Whitty said. She added that a reluctance to discuss scams can compound that vulnerability. “The data shows that many Australians don’t talk openly about deepfake scams – with only a third discussing AI-generated scams with their relatives or friends. That means fewer opportunities to share warning signs or learn from others’ experiences,” she said.
The research found that 67% of respondents had not discussed AI-generated scams with relatives or friends. While 74% agreed they should establish a safe word with loved ones to confirm identity, only 20% reported actually having one. Roberts said simple verification steps are becoming more important as voice cloning and text impersonation tools become more available. “Scammers can fake voices now, so it’s okay to double-check. In fact, it’s smart,” he said. Whitty encouraged continued information-sharing. “Be vigilant. Educate yourself. And if things look suspicious, talk with others about it,” she said.
The survey showed that 27% of Australians had encountered at least one deepfake scam in the previous year. Among those incidents, 59% involved investment scams, 40% related to business email compromise or payment redirection, and 38% were linked to relationship or romance scams. Examples for individuals include deepfake videos of public figures promoting investment schemes, “Hey Mum/Dad” phishing attempts using cloned voices or texts to create urgency, and online relationships supported by AI-generated images or manipulated video calls.
For small businesses, 41% of owners surveyed said they were familiar with deepfake scams, and half of reported deepfake scam attempts were delivered via email. Despite this, only 55% of small businesses said they had cross-checked supplier payment details in the previous six months, and just 48% said they verify suspicious information. Roberts said internal and household-level discussions are an important protective measure. “Scammers are using AI to create fake investment videos, deepfake celebrities, and even voice and text clones of loved ones, senior executives, and government officials. Talking openly about this technology is one of the easiest ways to help stay ahead of it,” Roberts said. For insurers and brokers, the data points to continued emphasis on controls such as payment verification, change-of-details procedures, and staff training around social engineering, particularly for cyber, crime, and social engineering cover.
Roberts said deepfakes sit within a broader scams environment that spans financial services, telecommunications, and digital platforms. “We recognise the impact of scams on Australians and support the Australian government’s Scams Prevention Framework to introduce obligations initially across banks, telcos, and digital platforms. Deepfakes are showing up on social media, messaging platforms, websites, and even through phone calls – and we welcome stronger protections across those industries, as well as banking. Deepfakes are new, but protecting yourself hasn’t changed – and with stronger protections across all channels, we can help keep more Australians safe,” he said.
CommBank’s findings are consistent with Aon’s 2025 Cyber Risk Report, which analysed data from more than 3,200 Aon clients and over 1,400 global cyber events. The report recorded a 29% year-on-year increase in cyber incidents in Asia-Pacific and a 134% rise in incident frequency over four years. In 2024, cyber insurance claims notifications increased by 22%, indicating a larger volume of insured losses. Incidents involving artificial intelligence, particularly deepfake technology, contributed to a 53% year-on-year increase in social engineering attacks. Insurance claims related to social engineering and fraud rose by 233%, reflecting changes in attacker tactics and insured exposures.
For the Australian insurance market, the combination of high public confidence, limited familiarity with deepfake threats, and rising AI-enabled fraud activity points to a continued need for underwriting discipline, assessment of insureds’ social engineering controls, and careful claims management across cyber and financial lines portfolios.