Nearly two-thirds of EMEA businesses admit they are only "somewhat prepared" for AI-linked cyber exposures, even as adoption of the technology accelerates across the region, a poll conducted by Aon found.
The survey of 75 organisations revealed a widening gap between the pace at which companies are deploying AI tools and the maturity of the risk frameworks in place to manage them. Just 18.5% of respondents said they had carried out a risk assessment that included AI-related exposures, while 51.8% had measured general cyber risk without an explicit focus on AI.
A further 28.3% had not undertaken any recent risk quantification at all.
Those figures align with broader findings from Aon's 2025 Global Risk Management Survey, which polled nearly 3,000 executives across 63 countries. Despite cyber ranking as the top current risk globally, only 13% of respondents said they had quantified their exposure – a gap that Aon said contributes to widespread underinsurance.
Brent Rieth, Aon's global cyber leader, said organisations across EMEA recognise the significance of AI and cybersecurity but remain at an early stage of readiness.
"They need to strengthen AI-specific threat modelling, integrate emerging exposures into formal risk discussions and upskill teams to enhance detection and response," Rieth said.
The urgency is not theoretical. Data published by CrowdStrike in early 2025 showed AI-supported campaigns accounted for over 80% of social engineering attacks, with vishing incidents growing 442% between the first and second halves of 2024. Average breakout time, the window between an attacker's initial access and their first lateral movement, fell to 48 minutes.
The World Economic Forum's Global Cybersecurity Outlook 2026 report found that cyber-enabled fraud had overtaken ransomware as the top concern for CEOs worldwide, with 73% of respondents saying their organisations were directly affected by such fraud in 2025.
The same report noted that concerns about generative AI data leaks (34%) now outweigh fears about adversarial AI capabilities (29%) – a reversal from 2025, when adversarial use topped the list at 47%.
A PwC survey released in late 2025 painted a similar picture: only about half of respondents described their organisations as very capable of withstanding attacks on common vulnerabilities, while just 6% said they were prepared across all of them.
Industry bodies have begun to define what robust AI cyber governance entails. The US National Institute of Standards and Technology released a preliminary draft of its Cyber AI Profile in December 2025, layering AI-specific priorities onto the existing Cybersecurity Framework 2.0 across three focus areas: securing AI systems, AI-enabled defence, and countering AI-enabled attacks.
Law firm Crowell & Moring said the profile "has the potential to become a de facto benchmark for regulators."
The SANS Institute's draft Critical AI Security Guidelines, also published in 2025, recommend maintaining an AI Bill of Materials to document supply chain dependencies, enforcing strict access controls under least-privilege principles, and aligning with frameworks such as MITRE ATLAS.
David Molony, head of cyber solutions EMEA for Aon, warned that AI is raising the capabilities of threat actors. "Businesses must act quickly to embed AI technology-specific controls and modelling or risk leaving critical technology assets vulnerable and exposed," Molony said.