Artificial intelligence may be reshaping how insurers assess cyber risk, but it’s also expanding the threat surface in ways few companies are prepared to manage. At the National Insurance Conference of Canada (NICC), cybersecurity and AI specialists warned that “shadow AI” – the unmonitored use of generative models across business functions – is rapidly emerging as a critical blind spot for underwriters and insureds alike.
Paul Caiazzo (pictured centre left), chief threat officer at Quorum Cyber, said his team now handles “hundreds of ransomware cases a year” and sees the same systemic weaknesses repeating across industries.
“One of the things I think we should all be taking into account is shadow AI, which is just the next generation of shadow IT,” he said. “It’s very common today to see shadow AI where somebody has an LLM being used for some very specific thing that the governance structure within the organization isn’t aware of.”
Caiazzo said many companies have drafted AI governance policies on paper but fail to enforce them in practice. Employees may use large language models or plug-ins for tasks like coding, document summarization, or marketing without disclosure – potentially exposing confidential data to third parties.
“I would suggest that we all ask our customers what is authorized use of AI in their organization, and what governance controls are in place to allow for or disallow certain types of usage,” he said.
At the core of this problem, he added, is data governance – the same issue that drives most cyber incidents.
Whether an adversary is motivated by financial gain or espionage, they are still trying to access the same set of data, Caiazzo said.
“If we can apply good controls around who can access it, in what manner, and for what purpose, we reduce that risk. It’s not as sexy as AI-security controls, but it’s incredibly critical.”
Despite growing awareness, Caiazzo said most victims of large-scale ransomware incidents still fail at basic cyber hygiene. “It’s often a pretty soft target,” he said. “They lacked business controls, or technical controls, that could have stopped the adversary from accomplishing their objective in the first place.”
He urged insurers and brokers to press clients for documentation outlining what data they hold, what AI systems are authorized, and how access is managed. Otherwise, he warned, companies might wind up back “in the same place we were several years ago with shadow IT – and that being the initial access point behind a certification.”
The shift toward data theft over encryption, he added, is another sign that attackers are adapting.
“We’ve seen major ransomware organizations pivot away completely from encryption just to data exfiltration,” Caiazzo said. “Understanding what data you’ve got and what controls you have around it are exceptionally critical.”
Security awareness training also remains a top priority, he said, and it will only grow more important as adversaries gain the volume and velocity that AI affords. Phishing attacks will become more convincing and more frequent, and users need to be better prepared to recognize them, he warned.
While AI introduces new vulnerabilities, it also brings the promise of proactive defense. Luigi Lenguito (pictured centre), CEO of BforeAI, said the technology can be used to detect attacks earlier – even before threat actors execute them.
“An area that is going extremely under-represented in protection is all the supply chain and the supplier-client infrastructure connection,” he said, noting that a major car manufacturer recently shut down its systems after an infiltration through a supplier network. These are extremely complex, massively interconnected systems in which humans cannot process data in a timely fashion, but AI can help immensely, he added.
Lenguito said AI can augment security teams’ ability to map hidden relationships and spot anomalies faster than human analysts.
But he cautioned that the human factor remains a critical weakness – citing an incident in which a customer-support agent for a major cryptocurrency exchange was bribed to provide access to internal systems.
“The more the knowledge of the company gets absorbed by AI models, the more dependent the business becomes on those models,” Lenguito said.
To counter that, he said AI can also help enforce human role boundaries, identifying behaviors inconsistent with an employee’s function or access level.
Lenguito pointed to BforeAI’s own predictive analytics platform as an example of “prescriptive defense” – using AI to detect attack infrastructure before it launches.
“We are bringing technology that enables us to see the infrastructure of the criminal before it gets ready for an attack,” he said.
He added that preventing attacks upfront not only reduces loss severity but also cuts costs dramatically. Still, even the most advanced AI systems are not infallible. Lenguito said predictive models are correct “99 to 99.5 percent of the time,” but false positives remain a challenge.
“We need to know the limitations,” he said. “In our case, we back our predictions with a pre-trial performance guarantee – if one of our prediction failures causes a loss, we fund our customer 10 times the contract value.”