New research shows widespread concern in Australia about crimes facilitated by artificial intelligence (AI), with respondents most worried about AI being used to track their location, gain access to their devices or accounts, and impersonate or deceive them in ways that could result in financial loss, embarrassment or harm.
The Australian Institute of Criminology’s (AIC) Statistical Bulletin 51, based on more than 6,000 responses to the 2024 Australian Cybercrime Survey, reported that about half of Australian adults believe AI could be used to harm them, and roughly one in five expect to be victimised by an AI-enabled crime within a year.
Respondents cited risks that AI could be used to monitor their movements, compromise their devices or online accounts, and pose as them to contacts. Many pointed to AI tools that can assist with password cracking, voice and facial mimicry, and identity spoofing. AIC deputy director Rick Brown said the findings are intended to inform ongoing policy work on cyber risk and AI-related safeguards. He described the research as a “timely and much needed contribution to Australia’s national cyber policy conversation,” and said it is designed to ensure that community views inform “future safeguards, regulation, and public education.”
The bulletin found that older Australians were less likely to believe that AI-enabled crimes are common, but more likely to worry about being targeted. Parents identified risks involving AI-generated child sexual abuse material and grooming behaviour supported by fake online identities. According to the AIC, perceptions of risk are already influencing how people use AI: some respondents reported avoiding tools they regard as unsafe, while others may not fully recognise their exposure. The report outlined the need for safeguards around location data and identity verification, clearer guidance on device security and privacy, and broader education about AI-enabled scams and impersonation. These findings may inform how policyholders’ controls over identity authentication, device protection, and customer communications are assessed in underwriting and risk management discussions.
On the threat actor side, research from Trend Micro’s Forward-Looking Threat Research team examines how AI is affecting open-source intelligence (OSINT) and reconnaissance for targeted attacks. Senior threat researchers Numaan Huq and David Sancho said AI has shifted OSINT from a largely manual activity to an automated process that can generate large volumes of target profiles. Public data – including posts, images, and metadata on professional networking platforms – can be processed as machine-readable intelligence, then combined and ranked using widely available tools.
In their analysis, reconnaissance is no longer the main constraint for targeted campaigns. Attackers can generate customised messaging, documents, and media quickly enough to conduct highly tailored phishing or social-engineering efforts at volume, extending an organisation’s attack surface to employees’ external digital footprints. The researchers said defensive strategies should move beyond awareness training alone to include systematic “exposure management” – policies and controls that assume adversaries may have extensive visibility of staff identities, roles, and connections. That approach raises questions about how insured organisations manage executive and employee profiles and how those practices are reflected in cyber risk assessments, particularly for social engineering, business email compromise, and executive impersonation covers.
A separate study from LevelBlue indicates that many chief information security officers (CISOs) report higher confidence in established cyber disciplines than in their readiness for AI-enabled attacks and software supply chain exposures. The “Persona Spotlight: CISO” report found that 60% of respondents rated themselves as highly competent in areas such as cyber resilience, core security operations, and engagement with the wider business. In addition, 61% said their adaptive cybersecurity approach allows their organisations to take greater innovation risks.
Reported preparedness was lower when CISOs considered AI-enabled adversaries and deepfakes. Only 53% said they felt ready to defend against AI-enabled adversaries, and 45% expected AI-powered or deepfake attacks to affect their organisations within the next 12 months. The research also identified alignment and governance gaps. More than half of senior executives said they were less likely than a year earlier to treat cybersecurity as a standalone function, but fewer than half of CISOs believed their organisation’s risk appetite was aligned with its cybersecurity risk management. Only 37% said cybersecurity budgets are built into projects from the outset.
Respondents cited governance structures as a constraint, with 60% pointing to governance teams’ limited understanding of cyber resilience and to unclear ownership of cyber risk. While 55% said cybersecurity is increasingly treated as a shared leadership responsibility with defined metrics, only 43% described their organisation as having an effective cybersecurity culture. Software supply chain risk was another focus: just 31% of CISOs viewed the software supply chain as a potential primary security risk, and only 25% prioritised assigning confidence levels to suppliers to improve third-party visibility. The findings raise questions about whether insured organisations’ stated risk appetites, governance arrangements, and third-party controls are consistent with their AI use and software dependencies, including in the context of cyber and technology E&O wordings.
At the retail and small business level, research commissioned by Commonwealth Bank of Australia (CommBank) suggests a gap between Australians’ confidence in spotting AI-generated scams and their demonstrated ability to do so. The study found that 89% of respondents felt at least somewhat confident identifying an AI-driven scam. Yet when tested on their ability to distinguish between genuine and AI-generated images, participants were correct only 42% of the time – below the 50% accuracy that random guessing would produce on a binary choice between genuine and AI-generated.
Performance was similar across age groups. Australians over 65 were only 6 percentage points less accurate than younger respondents, indicating that deepfake imagery can mislead a wide range of age cohorts. Despite increased use of AI in fraud attempts, only 42% of those surveyed said they were familiar with AI-enhanced or deepfake scams. The research also found that 67% of respondents had not discussed AI-generated scams with relatives or friends. While 74% agreed they should establish a safe word with loved ones to confirm identity, only 20% reported having one.
The research points to intersecting themes: concern about AI-enabled crime, broader use of AI-driven reconnaissance, uneven organisational readiness, and limited consumer capability to detect synthetic content. On the retail side, exposure to deepfake-enabled scams may influence claims experience for cyber extensions on home and contents policies, standalone personal cyber covers, and fraud-related benefits on banking and card products. For small and medium-sized enterprises, the combination of AI-enabled business email compromise, payment redirection, and executive impersonation underlines the role of controls such as call-back verification, dual authorisation, and documented change-of-details procedures, often tied to crime, cyber, and social engineering coverage conditions.
On the corporate side, growing use of AI in business processes and software supply chains, together with reported governance and culture issues, may affect how underwriters assess aggregation risk, vendor dependencies, incident response capability, and the strength of board oversight. The findings indicate scope to discuss staff digital footprint exposure, AI-specific threat scenarios, third-party risk management, and training that addresses AI-assisted social engineering. The data also points to a continuing role for the sector in emphasising practical verification steps and open discussion about scams, alongside the evolution of cyber coverages designed to respond to AI-related incidents.