Threat actors are now outsourcing part of their ransomware operations – to artificial intelligence.
Speaking on a panel at the National Insurance Conference of Canada (NICC) in Gatineau, Paul Caiazzo, chief threat officer at Quorum Cyber, revealed that his team has increasingly encountered AI chatbots managing the opening stages of ransom negotiations.
“We often find that we're initially interacting with an AI chatbot in those situations, which is a new development over the last year,” Caiazzo said.
For financially motivated cybercriminals, AI offers the same advantage it does for legitimate businesses: efficiency. Where companies use AI to streamline customer service or transaction processing, ransomware groups are using it to scale their extortion rackets.
Caiazzo explained that the bots typically handle the intake phase of a negotiation, posing as human interlocutors and gathering information from victims. Once thresholds are crossed – such as a ransom demand hitting a certain dollar figure or the process dragging on too long – the interaction shifts to a live criminal operator.
“They are basically trying to make it more efficient for them to get through the targeting and the initial access to the organizations that they target,” he said.
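Caiazzo did not describe the criminals’ tooling itself, but the handoff pattern he outlines is straightforward to picture. Below is a minimal, hypothetical sketch of that threshold-based escalation; the dollar limit, time limit, and field names are illustrative assumptions, not details reported on the panel.

```python
from dataclasses import dataclass, field
import time

# Hypothetical illustration of the bot-to-human handoff pattern described
# on the panel: an automated agent handles intake until a threshold trips.
# The specific limits and fields here are assumptions, not reported details.

DEMAND_LIMIT_USD = 250_000       # assumed escalation threshold
MAX_INTAKE_SECONDS = 48 * 3600   # assumed time limit before a human takes over

@dataclass
class Negotiation:
    started_at: float = field(default_factory=time.time)
    current_demand_usd: float = 0.0
    handled_by: str = "bot"

    def should_escalate(self) -> bool:
        """Return True once talks cross a dollar or elapsed-time threshold."""
        too_large = self.current_demand_usd >= DEMAND_LIMIT_USD
        too_long = time.time() - self.started_at >= MAX_INTAKE_SECONDS
        return too_large or too_long

    def record_counteroffer(self, amount_usd: float) -> None:
        """Log the latest figure and hand off to a live operator if needed."""
        self.current_demand_usd = amount_usd
        if self.handled_by == "bot" and self.should_escalate():
            self.handled_by = "human"

# Usage: the bot runs intake until a counteroffer trips the limit.
n = Negotiation()
n.record_counteroffer(100_000)
print(n.handled_by)  # "bot"
n.record_counteroffer(300_000)
print(n.handled_by)  # "human"
```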
While AI-driven chatbots are changing how ransomware negotiations unfold, Caiazzo stressed that criminals are also finding ways to enhance their entry points.
Cybercriminal groups are increasingly using deepfake technology and other AI tools to supercharge spear-phishing campaigns, he said. By tailoring messages to organizations seen as more vulnerable or more likely to pay a ransom, attackers can significantly raise their success rates, he added. These AI-enhanced campaigns often appear authentic enough to bypass traditional defences, putting unprepared businesses at higher risk of compromise.
For businesses and end users, that means phishing emails and messages are becoming harder to detect. AI-generated content can convincingly mimic executives, customer service representatives, or even colleagues – raising the risk that staff will click on malicious links or authorize fraudulent payments.
“That manifests for the end users as very, very convincing phishing attacks,” Caiazzo said. “And that’s probably the most common thing that we see on a regular basis.”
Caiazzo was careful to temper the more apocalyptic narratives about malicious AI. He said attackers are not unleashing autonomous, end-to-end cyber armies; instead, they are borrowing the same productivity playbook businesses use. Rather than inventing brand-new technical exploits, most financially motivated groups are applying AI to low-complexity, high-payoff tasks: crafting believable lures, automating outreach, and handling routine administrative steps in an operation.
In practice, that looks like two things. First, AI helps scale and sharpen social engineering: deepfakes and tailored messaging let criminals impersonate executives or suppliers with uncanny accuracy, improving click-through and payment rates. Second, AI handles commodity work that used to be human drudgery: intake, screening, initial negotiation and customer-service-style interactions during extortion. Caiazzo’s team increasingly encounters chatbots conducting the opening phases of ransom talks; humans step back in only once thresholds or complications require it.
More sophisticated uses – automated vulnerability discovery, fully autonomous attacks, or advanced, AI-directed exploitation of new technical flaws – remain largely the domain of well-resourced nation-state actors. For most criminal groups, the rational choice is to squeeze efficiency out of existing methods that reliably work against people, not to invent entirely new classes of attack.
The implication for organizations, as Caiazzo said, is clear: the weakest link remains the user. Investments in detection tooling matter, but so do the basics: phishing awareness, verification procedures for financial requests, and robust escalation processes when unusual demands arrive. Defenders also need ways to spot AI fingerprints (for example, signs of synthetic media or bot negotiation patterns) and to adapt incident response playbooks to conversations that may start with an automated interlocutor.
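The panel did not spell out what a “bot negotiation pattern” looks like in practice. As one crude illustration (a sketch, not a production detector), a responder might flag a counterpart whose replies arrive implausibly fast and uniformly, or whose messages read as near-identical templates. The function name and every threshold below are assumptions made for the example.

```python
import statistics
from difflib import SequenceMatcher

# Illustrative heuristic only: flags a chat transcript as possibly automated
# when replies arrive implausibly fast and uniformly, and when message
# wording is highly templated. All thresholds are assumptions, not field data.

FAST_REPLY_SECONDS = 5.0    # assumed: humans rarely reply this fast consistently
LOW_JITTER_SECONDS = 1.0    # assumed: near-constant reply latency suggests a script
TEMPLATE_SIMILARITY = 0.8   # assumed: near-identical messages suggest canned text

def looks_automated(reply_delays: list[float], messages: list[str]) -> bool:
    """Score a counterpart's replies for bot-like timing and phrasing."""
    fast = statistics.mean(reply_delays) < FAST_REPLY_SECONDS
    uniform = (len(reply_delays) > 1
               and statistics.stdev(reply_delays) < LOW_JITTER_SECONDS)
    # Compare consecutive messages for templated, near-duplicate wording.
    pairs = zip(messages, messages[1:])
    sims = [SequenceMatcher(None, a, b).ratio() for a, b in pairs]
    templated = bool(sims) and statistics.mean(sims) > TEMPLATE_SIMILARITY
    return (fast and uniform) or templated

# Usage with a toy transcript: fast, uniform replies plus boilerplate
# phrasing read as automated under these assumed thresholds.
delays = [1.2, 1.1, 1.3, 1.2]
msgs = [
    "Payment instructions will follow. Confirm receipt.",
    "Payment instructions will follow. Confirm your offer.",
    "Payment instructions will follow. Confirm the amount.",
]
print(looks_automated(delays, msgs))  # True
```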
For insurers and risk advisers, the rise of AI-assisted extortion underscores a shifting claims landscape: faster-moving, more convincing scams, and a need for response services that can triage automated extortion workflows quickly. As Caiazzo put it on the panel, attackers are optimizing their criminal business models – and defenders must optimize theirs in response.
“But it’s nothing like Skynet,” he said.