Artificial intelligence (AI) tools are spreading rapidly across industries, offering organizations new avenues for efficiency, insight, and growth. But as adoption accelerates, so too does exposure to cyber risk.
While AI can strengthen defenses, its capabilities are also being harnessed by cybercriminals. The World Economic Forum reported a 223% surge in generative AI applications on the dark web between 2023 and 2024. Meanwhile, a survey by cybersecurity training firm SoSafe in March this year found that 87% of security professionals had encountered an AI-powered attack against their business.
Greg Scoblete (pictured), principal on Verisk's emerging issues team, highlighted three ways AI is creating new attack surfaces for organizations during a recent webinar on cybersecurity and AI.
“This is a technology that both amplifies risk and creates new opportunities to mitigate it,” Scoblete said.
One major area of concern is adversarial machine learning, a family of cyberattacks targeting AI models at various stages of their development. Scoblete warned of two forms gaining attention: poisoning attacks and privacy attacks.
“Data poisoning refers to attempts to interfere with an AI model’s outputs by tampering with the data used to train it,” Scoblete said.
Poisoning can occur actively, when a hacker or insider inserts corrupted files into a training dataset, or passively, when poisoned data is unknowingly incorporated.
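To make the mechanics concrete, the minimal sketch below (an illustration using scikit-learn stand-ins, not any specific attack Scoblete described) flips the labels on five percent of a toy training set and compares the resulting model against one trained on clean data:

```python
# Toy label-flipping poisoning: train one model on clean labels, one on a
# training set where 5% of labels have been flipped, and compare accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Active" poisoning: an attacker flips labels on a small slice of the data.
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=int(0.05 * len(y_poisoned)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

dirty = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean model accuracy:   ", clean.score(X_test, y_test))
print("poisoned model accuracy:", dirty.score(X_test, y_test))
```

Real attacks are far subtler than label flipping, but the principle is the same: corrupt a small fraction of the training data and the model's outputs degrade.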
In one 2023 example, Scoblete said researchers developed a tool to embed tiny amounts of corrupted data into digital artwork. These files were invisible to the naked eye and difficult for automated tools to detect. If scraped and used in AI training, they could degrade a model’s outputs.
The threat is not only effective but also inexpensive. “Researchers last year showed they could poison 0.01% of a popular training dataset for just $60,” Scoblete said.
Features of how modern models are trained can make poisoning systemic. Federated learning, where multiple organizations jointly train a model while each keeps custody of its own data, carries this risk: if even one participant is compromised, the shared model can be corrupted.
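A toy sketch of that fragility, assuming a simplified federated-averaging scheme with hypothetical weight vectors: nine honest clients submit similar updates, while a single compromised client submits a scaled, poisoned update that drags the shared model far from consensus.

```python
# Toy federated averaging (FedAvg): nine honest clients and one compromised
# client each submit a weight update; naive averaging trusts them equally.
import numpy as np

global_w = np.zeros(4)  # shared model weights (hypothetical)

# Honest clients nudge the weights by roughly the same small amount.
honest_updates = [global_w + np.full(4, 0.1) for _ in range(9)]

# A single compromised participant submits a wildly scaled, poisoned update.
poisoned_update = global_w + np.full(4, -10.0)

# The aggregate is dragged far from the honest consensus of about 0.1.
global_w = np.mean(honest_updates + [poisoned_update], axis=0)
print("aggregated weights:", global_w)  # roughly -0.91 per weight
```

Robust aggregation schemes, such as median- or trimmed-mean-based updates, exist precisely because naive averaging trusts every participant equally.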
Privacy attacks, by contrast, target models that have already been trained and deployed. These attacks can extract sensitive data, reveal details of how a model works, or even replicate the model itself.
The risks are significant because AI models are often trained on, and can memorize, personally identifiable information, intellectual property, and trade secrets.
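Model replication, one of the privacy attacks Scoblete flagged, can be pictured with the query-only sketch below; the victim and surrogate models are scikit-learn stand-ins, not a real deployment:

```python
# Toy model extraction: an attacker with query-only access labels probe
# inputs with the victim's predictions and trains a look-alike surrogate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
victim = DecisionTreeClassifier(random_state=1).fit(X, y)  # the deployed model

# The attacker never sees the training data, only the model's answers.
probes = np.random.default_rng(1).normal(size=(5000, 10))
stolen_labels = victim.predict(probes)

surrogate = DecisionTreeClassifier(random_state=1).fit(probes, stolen_labels)
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate matches victim on {agreement:.0%} of inputs")
```

The attacker ends up with a working copy of a model the organization may have spent millions to build, without ever breaching its network.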
Scoblete also highlighted the issue of data seepage, which occurs when AI systems inadvertently expose sensitive information, or when users upload confidential data into AI tools without safeguards.
“In 2023, a transcription tool accidentally distributed confidential meeting notes to the wrong participants,” he noted. “It wasn’t hacked – it just made a mistake.”
Human missteps are another weak point. A widely reported case involved a technology employee uploading proprietary source code to a public chatbot. “That incident made headlines and spooked a lot of CIOs,” Scoblete said.
Yet corporate governance remains uneven. According to an IBM survey, only 37% of organizations have any governance around AI use. “So we should expect more headlines about data seepage in the future,” he warned.
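Governance can start with simple controls. The sketch below shows one basic, hypothetical safeguard against the seepage Scoblete describes: a regex redaction pass over prompts before they are sent to an external model. The patterns and workflow are illustrative, not a complete data-loss-prevention program.

```python
# A basic regex redaction pass applied to prompts before they leave the
# organization. Patterns are illustrative placeholders, not a full DLP policy.
import re

PATTERNS = {
    "EMAIL":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern before submission."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

text = "Email jane.doe@example.com, SSN 123-45-6789, key sk-abcdef1234567890XY"
print(redact(text))  # only the redacted version should reach an external model
```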
The third risk category is AI agents, also known as agentic AI. These systems extend the capabilities of large language models by allowing them to operate autonomously.
“Think of it as robotic process automation taken much further,” Scoblete said. “Instead of just automating repetitive tasks, agents are designed to work in less structured environments, interpreting natural language prompts and acting with a high degree of autonomy.”
Theoretically, AI agents can surf the web, access datasets through APIs, write and execute code, conduct online transactions, and even program their own sub-agents. But this autonomy carries profound risks, including errors and “hallucinations” that can cause data seepage.
“The more freedom you give an AI agent, the greater the chance it could misuse or expose sensitive data, disrupt business systems, or be hijacked by hackers,” Scoblete said. “And because they mimic human actors, there’s a risk that hackers could impersonate AI agents, or worse, that employees could mistake a human hacker for a legitimate AI agent inside company systems.”
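A common mitigation is to bound that freedom explicitly. The toy sketch below, with a placeholder plan_next_action() standing in for a real model call, enforces a tool allowlist, a hard step budget, and an audit trail, so a hijacked or hallucinating agent cannot silently reach capabilities it was never granted:

```python
# Toy agent loop with an explicit tool allowlist, a hard step budget, and an
# audit trail. plan_next_action() is a placeholder for a real LLM call.
ALLOWED_TOOLS = {
    "search_docs": lambda query: f"results for {query!r}",  # read-only tool
}

def plan_next_action(goal, history):
    # Placeholder: a real agent would ask an LLM to choose a tool and args.
    return ("search_docs", {"query": goal}) if not history else ("stop", {})

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):                 # the agent cannot loop forever
        tool, args = plan_next_action(goal, history)
        if tool == "stop":
            break
        if tool not in ALLOWED_TOOLS:          # block anything not granted
            raise PermissionError(f"blocked tool: {tool}")
        history.append((tool, ALLOWED_TOOLS[tool](**args)))  # audit trail
    return history

print(run_agent("find our data-retention policy"))
```

The design choice is deliberate: autonomy is granted per tool, not wholesale, and every action the agent takes is logged for later review.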
As these risks emerge, clients will increasingly look to their brokers for guidance on how to manage AI exposures and how insurance coverage can respond.
Industry specialists suggest brokers focus on practical steps: