AI tools are driving new fraud patterns in New Zealand, changing how scams are carried out across voice, video, web, and messaging channels and reshaping the country’s fraud and cyber risk exposure. Cybersecurity firm Norton reports that artificial intelligence and deepfake technology are increasingly being used to modify long-standing scam methods rather than create entirely new ones, with criminals applying the tools to impersonation, phishing, romance fraud, and investment schemes that target both individuals and organisations.
The company has identified five main AI-enabled scam types in 2025: voice cloning, multi-channel business email compromise, AI-generated phishing sites, chatbot-driven romance fraud, and AI-assisted sextortion.
“AI tools are advancing at lightning speed, making everyday life more efficient and creative. But scammers are always close behind, weaponizing the same technology to trick, manipulate, and steal,” said Michal Salát, a threat research expert at Norton, as reported by Security Brief.
Norton’s data indicates that hundreds of thousands of AI-generated scam sites have appeared globally this year. In New Zealand, NCSC figures show direct scam and fraud losses of $5.7 million in the most recent quarter. For the insurance sector, the mix of steady incident volumes and more persuasive scam mechanisms is relevant for cyber, crime, and professional liability portfolios.
Norton notes that voice cloning is now more commonly used by criminals, as widely available tools can recreate a person’s voice from only a short audio sample. Offenders then call targets while posing as a relative, colleague, or representative of a bank or other institution, often presenting the contact as an urgent problem that requires fast financial or account action. Citing recent information from BNZ, Norton said voice cloning is now regarded as one of the main AI-related scam concerns in New Zealand. The bank has warned that callers may be able to closely reproduce the voices of trusted individuals during fraudulent interactions, weakening the reliability of informal voice-based checks.
At an organisational level, Norton reports that traditional business email compromise is developing into a multi-channel threat that combines spoofed email with AI-generated audio and, in some cases, synthetic video. Attackers collect public recordings from speeches, earnings calls, and media interviews to train models that imitate the voices of senior executives.
Norton referenced a reported incident involving advertising group WPP, where scammers allegedly used a cloned CEO voice during a video-style call to seek credentials and authorisation for fund movements. While the attempt did not result in a confirmed major loss, Norton said it shows how a combination of email, voice, and video can make fraudulent instructions harder for staff to challenge. For underwriters and risk managers, these trends raise practical questions about how organisations confirm payment instructions, manage executive impersonation risk, and document control gaps in the event of a claim.
Norton’s research also points to an increase in phishing sites built with AI-based website tools. Criminals prompt these systems to imitate the look and structure of banks, delivery providers, and major technology brands, including layouts, branding, and customer support features that resemble legitimate channels. According to Norton, New Zealand has recorded a 416% increase in web skimming attempts this year, in which malicious code is inserted into checkout pages to capture card details and billing information. The firm has observed hundreds of new malicious AI-generated websites appearing each day worldwide, often relying on small variations in URLs and brand names to mislead users who do not closely check addresses.
In the consumer and SME segment, romance and friendship scams are being reshaped by AI chatbots and deepfake content. Norton said chatbots can sustain ongoing, consistent conversations that help fraudsters maintain online relationships over weeks or months. Deepfake videos or heavily edited images may then be used as supposed proof of identity.
In New Zealand, Norton reports that AI-driven romance and sextortion scams increasingly rely on personalised details and manipulated images. Avast researchers, cited by Norton, found that sextortion scams in New Zealand rose by 137% in early 2025, with threat actors using AI-generated deepfake material and messages that reference data from earlier breaches. Victims are told that explicit content will be released unless they pay, with accurate personal information used to reinforce the threat. For insurers, these patterns intersect with identity theft, privacy, financial loss, and mental harm exposures, and may create follow-on exposure where compromised accounts, business systems, or funds are drawn into the incident.
These targeted scams form part of a wider pattern of cyber activity in New Zealand, reflected in the latest NCSC incident statistics. The NCSC’s Cyber Security Insights report for the period from April 1 to June 30, 2025, recorded 1,315 cyber security incidents. Scams and fraud remained the largest category with 514 reports, while phishing and credential harvesting accounted for 374 incidents.
Direct financial losses reported to the NCSC totalled $5.7 million for the quarter, down from $7.8 million in the first quarter of 2025. Overall incident volumes decreased by 3% over the same period. However, a limited number of higher-value events continued to account for most of the loss: incidents involving losses of $10,000 or more represented $5.3 million, or 94% of total reported loss, across 50 cases.
Of the 1,315 incidents, 56 were escalated for specialist technical support because they were considered to have potential national significance. The remaining 1,259 incidents were handled via the NCSC’s general triage process and were largely reported by individuals and businesses.