For years, cyber risk conversations have centred on patching, perimeter controls and the latest vulnerability. Rafael Sanchez (pictured), head of cyber services at Beazley, thinks that the focus misses where the most durable threat really lies.
“Social engineering is very prevalent,” he said. “One of the key elements to social engineering’s enduring appeal to threat actors is that it leverages identity.”
In practical terms, that means attackers are less interested in exotic zero‑days than they are in persuading someone inside an organization to do something on their behalf – click a link, share credentials, pay a fake invoice or open a door, digital or otherwise.
“At the most basic level, I have received emails from someone from my company. I have not received emails from you,” he told Insurance Business Canada during an interview. “If I receive an email from them, I am more likely to trust or do something that they are asking me to do as a result of me knowing them. So their identity is of value to threat actors.”
Sanchez has been working in privacy and incident response for more than 25 years and now oversees Beazley’s global cyber services function. He has watched claim patterns shift as companies are pushed to improve their core defences.
From 2020 through 2022, the market saw a steep rise in ransomware claims as organizations rushed to enable remote work and left remote‑access doors open or unsecured. In response, underwriting tightened and minimum controls such as multi‑factor authentication and better patching became non‑negotiable.
“Organizations are getting more mature on their infrastructure side,” he said, pointing to obligations under sector‑specific rules and new digital‑operational‑resilience regimes overseas. In Europe, frameworks like DORA and NIS2 have forced financial institutions and other critical sectors to raise their baseline security. In Canada, similar pressures are emerging through federal guidance and industry regulation.
As those controls become standard, Sanchez expects opportunistic attacks on unprotected infrastructure to become more expensive and less attractive for criminals.
“You have organizations that are protecting themselves more and more,” he said. “The kind of easy attacks that they were doing during the pandemic when people were turning things on so that you could work remotely but not really understanding the risks – those things are becoming more complicated and expensive for attackers to leverage.”
What does remain cheap and scalable is anything that exploits trust between people.
“We see concerted campaigns of spear phishing against organizations that span a year,” Sanchez said, noting that finance teams and senior executives are popular targets because their identities are especially valuable. “Anything to do with identity, I feel, is going to be a very enduring and popular attack method.”
Artificial intelligence is often portrayed as the next great disruptor in cybercrime. Sanchez is more measured. For now, he believes AI is being used primarily as a research accelerator, not as a full “end‑to‑end” attack engine.
“They’re using AI to do OSINT – open‑source intelligence – to do research, make things quicker,” he said, describing tools that can scan breach dumps and public data to find useful leads. “I don’t think AI is being used extremely effectively by threat actors. I don’t think AI is being used extremely effectively by organizations.”
Current tools help attackers sift through huge data troves compiled from years of breaches – email addresses, usernames, partial passwords, social media handles and more. Sanchez pointed to large aggregated datasets on the dark web that bundle information from dozens of incidents into a single “mother of all breaches”‑style file.
From there, AI can be pointed at specific questions: find people who work in finance at a given company, identify those who have expressed controversial political opinions online, or surface personal details that might lend credibility to a phishing email.
“What I do need is a convincing reason why, if I send you an email, you’d want to respond to me,” he said. AI makes crafting that pretext faster and more tailored.
The real inflection point, in his view, would be a move from today’s “research agents” to more autonomous, agentic AI that can take actions on its own – the digital equivalent of a bot that doesn’t just suggest flights but books them with your credit card.
“At the moment, I would say the barrier to any real escalation of severity for the use of AI is that it requires too much effort,” Sanchez said. Building and safely running agents that can handle an entire attack chain from reconnaissance to exploitation to negotiation is not trivial work, even for well‑resourced state actors.
“But if you were to have more effective use of end‑to‑end AI, you could see an increase in the number of attacks,” he added. Volume, more than size, is what concerns him.