From script kiddies to deepfakes: AI supercharges cyber risk

Criminals are using AI to drive a 600% surge in cyberattack infrastructure, with threats multiplying in volume, speed, and variety

By Branislav Urosevic

Criminals are using artificial intelligence to scale cyberattacks at unprecedented speed, shifting the threat landscape through what experts describe as the “three Vs” – soaring volume, faster velocity, and greater variety of attacks – while insurers warn that AI’s own probabilistic errors are creating new insurable risks.

Luigi Lenguito, CEO of BforeAI, and Michael Berger, head of AI insurance at Munich Re, unpacked the risks and opportunities of AI during a panel at the National Insurance Conference of Canada (NICC) in Gatineau.

Lenguito framed the shift in terms of three dimensions: volume, velocity, and variety. Together, he said, they are amplifying the threat landscape at a scale insurers have never had to underwrite before.

Volume: from “script kiddies” to autonomous criminals

Historically, many cybercriminals lacked the skill to develop their own tools, instead recycling malware built by more sophisticated groups. In industry slang, they were “script kiddies.” But AI is lowering the barrier to entry, Lenguito warned, making it easier for would-be attackers to operate autonomously.

“This technology is giving access to many more wannabe criminals,” he said. “They are becoming autonomous, they can use these events and other tooling to build their own criminal tool.”

He pointed to a Canadian example: a group that developed a fully autonomous phishing toolkit capable of launching large-scale campaigns without human oversight. “We expect a huge volume of attacks as we are already seeing them in the last nine months,” Lenguito said, noting a 600% growth in attack infrastructure compared to the prior six months.

The implication for insurers is profound: cyber incidents that were once limited to a handful of sophisticated operators are now within reach of thousands of lower-skilled actors, multiplying potential losses, he said.

Velocity: AI collapses attack timelines

AI is also speeding up how quickly criminals can probe and compromise systems. Where attackers once needed weeks to study a target’s infrastructure, AI models can compress that reconnaissance into hours or even minutes.

Lenguito cited an example from a UK telecommunications provider that faced a surge in credential-stuffing attacks – where attackers test stolen usernames and passwords to gain unauthorized access. Traditionally, these attacks had a success rate close to 0.5%. But by training machine-learning models on 10 years of leaked credentials, attackers pushed success rates above 50%.

“This model basically knows the password that people have used in the last 10 years,” Lenguito explained. “So it was still totally guessing, but from a restricted set of options.”
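The mechanism Lenguito describes can be illustrated with a toy simulation (all numbers here are assumptions for illustration, not figures from the UK incident): an attacker who ranks guesses by how often passwords appeared in a historical leak far outperforms one guessing blindly, because real users' password choices are heavily skewed toward a small set of favourites.

```python
import random
from collections import Counter

random.seed(0)

# Toy universe of 1,000 possible passwords; users pick them with a
# heavily skewed (Zipf-like) distribution, as real users tend to do.
universe = [f"pw{i}" for i in range(1000)]
weights = [1 / (rank + 1) for rank in range(1000)]

# Simulated historical leak: 50,000 passwords drawn from those habits.
leak = random.choices(universe, weights=weights, k=50_000)
ranked_guesses = [pw for pw, _ in Counter(leak).most_common()]

# Fresh victim accounts, drawn from the same habits.
victims = random.choices(universe, weights=weights, k=5_000)

budget = 10  # guesses the attacker can afford per account

# Naive attacker: 10 random guesses per account.
naive_hits = sum(v in random.sample(universe, budget) for v in victims)

# Informed attacker: always tries the 10 most popular leaked passwords.
informed_hits = sum(v in ranked_guesses[:budget] for v in victims)

print(f"naive success rate:    {naive_hits / len(victims):.1%}")
print(f"informed success rate: {informed_hits / len(victims):.1%}")
```

With the same guessing budget, the leak-informed attacker's hit rate is orders of magnitude higher than random guessing; a model trained on a decade of real leaks, as in Lenguito's example, exploits exactly this skew.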

That acceleration changes the defensive calculus: insurers and their clients can no longer assume that time is on their side. Once an exploit vector is found, it can be scaled and replicated almost instantly, Lenguito warned.

Variety: new forms of deception

Finally, AI is driving a proliferation of attack types. Deepfakes have already captured headlines, but Lenguito emphasized that impersonation scams are now targeting HR and recruitment functions with alarming regularity.

Criminal groups are increasingly targeting hiring processes from both directions. On one side, they impersonate companies, luring job seekers with fake postings and then demanding payment for equipment or onboarding fees. On the other, he said, they pose as candidates themselves – sometimes with links to state-backed groups – seeking to infiltrate organizations under false identities.

These tactics are not theoretical. Companies such as Atlassian and Indeed face thousands of such attempts daily, Lenguito noted. The trend echoes high-profile cases in which North Korean operatives posed as remote workers to infiltrate Western firms. For insurers, the HR function – once considered a low-risk back office – is emerging as a potential point of systemic vulnerability, he added.

Lenguito argued that the scale of the challenge makes traditional “detect and respond” models obsolete. “We cannot afford to detect the attack and then respond and be reactive. We need to stop it before it happens,” he said.

The insurable risk of AI itself

While Lenguito focused on how AI is weaponized by criminals, Michael Berger of Munich Re turned the lens inward: on the risks companies face when they rely on AI themselves.

Businesses are rapidly deploying generative AI to cut costs, streamline customer service, and support decision-making. But these models are probabilistic, not deterministic – meaning that even well-governed systems will inevitably make mistakes.

“Even the best AI and generative AI models will create errors,” Berger said. “It’s just a matter of probability. It’s not something which can be deterministically avoided.”
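Berger's point can be made concrete with a back-of-envelope calculation (the accuracy and volume figures below are assumptions for illustration, not Munich Re data): even a model that is right 99.9% of the time produces a predictable volume of errors at scale, which is precisely what makes the residual risk quantifiable and, in principle, insurable.

```python
# Assumed figures for a hypothetical deployment.
accuracy = 0.999               # per-response accuracy of the model
queries_per_year = 1_000_000   # annual query volume

# Expected number of erroneous outputs per year.
expected_errors = queries_per_year * (1 - accuracy)
print(f"expected erroneous outputs per year: {expected_errors:,.0f}")
```

The error count cannot be driven to zero by better governance alone, but its expected value and distribution can be estimated, which is the actuarial foothold for the "residual error insurance" Berger describes.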

That unpredictability makes AI adoption a double-edged sword: it creates business value but also introduces new liabilities. High-profile cases, such as AI systems giving incorrect legal or travel information, illustrate the reputational and financial risks.

For Munich Re and its subsidiary HSB, this opens a path for insurance innovation. Berger described the idea of “residual error insurance”: coverage that complements technical AI governance by transferring the financial risk of inevitable model mistakes.

“With AI adoption comes a new fundamental risk, namely this probabilistic nature of the correctness of those outputs,” he said. “But with this new risk also comes new opportunities for the insurance industry, by complementing governance with financial risk transfer.”
