Artificial intelligence is rapidly reshaping the risk landscape, and while the exposures are multiplying, measuring them is proving even harder. For insurers and brokers, the challenge is not just keeping pace with technology – it’s finding ways to quantify risks that have no historical data and evolve faster than traditional actuarial models can adapt.
James Bullock-Webster, director and head of tech, media and cyber at New Dawn Risk, told Insurance Business that the pace of change is what makes AI so uniquely disruptive.
“It’s extremely difficult,” he said. “The pace of technological change makes quantifying AI-related risks extremely challenging. We must learn from industry experts and continue to monitor emerging threats. Underwriters will undoubtedly approach these sectors with a degree of caution.”
One of the biggest obstacles is that insurers are not technologists. They understand risk, but they often lack the deep technical knowledge required to evaluate how AI systems actually function. Bullock-Webster described a recent initiative at Lloyd’s of London to address this gap.
His team invited an AI developer – someone who had built and sold a large language model and was already working on a second generation – to give a two-hour seminar to underwriters.
“It was fascinating,” he said. “This is where AI is, this is where it’s going – and that’s what we need to do: bring real experts in to help educate the market and build up knowledge so insurers can react.”
By opening the door to direct input from technologists, insurers can ground their underwriting assumptions in a more realistic understanding of how AI tools are developed, deployed, and attacked. It also helps them distinguish between hype and actual risk – something that is particularly important when dealing with a technology that is evolving at breakneck speed.
Some of the first cracks are already visible. Bullock-Webster pointed to ongoing lawsuits against large language model developers accused of copyright infringement. While the legal outcomes remain uncertain, the cases illustrate the kinds of novel liabilities that can surface when new technologies collide with existing legal frameworks.
“It’s that sort of interaction between existing technology and emerging technology where the frictions are not known,” he said. “How insurers analyze that in terms of what that might look like in the future is really, really challenging.”
The list of potential exposures is expanding: from deepfake-driven fraud and data poisoning attacks to regulatory investigations into algorithmic bias or discriminatory outcomes. Each of these risks lacks the kind of actuarial track record that insurers typically rely on.
That absence of data makes underwriting far more complex. Insurers are accustomed to quantifying risk based on decades of claims experience, but AI-related risks are moving too quickly for that playbook to work. Instead, carriers must build frameworks in real time, experimenting with policy language and exclusions while continuing to learn from incidents as they unfold.
For brokers, the challenge is not just placing policies but helping clients and carriers make sense of these emerging exposures. Bullock-Webster argued that brokers can play a unique role as interpreters between two very different worlds.
“Insurance and technology speak completely different languages,” he said. “Even if they’re both speaking English, it’s a hybrid of technical jargon. Brokers can add value by bringing those worlds together and helping synthesize the conversation.”
That role could become even more important as AI becomes embedded in insurers’ own operations. If carriers use AI to automate underwriting or claims, mistakes could create new liability exposures – wrongful denials, discriminatory decisions, or systemic errors. Brokers will need to understand not just the products they place but also the processes behind them, ensuring clients are aware of both the benefits and risks, he said.
Bullock-Webster suggested this evolution may even create demand for new types of professional liability insurance, protecting intermediaries who are caught in disputes over AI-driven decisions.
For now, the path forward is still uncertain. Bullock-Webster emphasized that the insurance industry is in the early stages of grappling with AI, and that developing effective risk transfer solutions will require patience, collaboration, and a willingness to learn from technologists.
“The answer is, there’s still a long way to go,” he said. “But if we can bridge the gap between insurance and technology and get them to talk to each other, brokers can help by being that interpreter. That’s how the industry can add value and remain relevant in this technical age.”