There’s growing recognition among industry experts that artificial intelligence (AI) is set to reshape the insurance landscape – altering workforce skill requirements and enabling a shift toward more proactive and predictive approaches to risk and claims management.
Whether it’s generative AI, machine learning, or emerging agentic models, it’s becoming increasingly clear that AI will touch every role in the insurance value chain in some capacity.
This is according to Laura Doddington, head of personal and commercial lines consulting and technology for North America at WTW, who says a one-size-fits-all strategy won’t suffice when it comes to implementing AI. Doddington emphasizes the importance of tailored planning to navigate the complexities of an AI-driven future in insurance.
As she put it, separating “hype from true promise” is paramount at this stage.
Doddington believes the true value for insurers lies not in grand promises, but in targeted applications.
“The key is to focus on specific use cases,” said Doddington. “That’s where we’re starting to see AI gain traction.”
One such area, she pointed out, is machine learning, a form of AI that has been used in insurance for years but still holds untapped potential. Doddington highlighted claims triage as a prime example: models help carriers quickly determine whether a claim is straightforward and can be fast-tracked, should be flagged for potential fraud, or requires more in-depth review due to its size or complexity.
“You can definitely triage claims more effectively,” she said.
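The routing Doddington describes might look something like the following simplified sketch. The scores would come from a carrier's own models; the field names, thresholds, and dollar limit here are invented purely for illustration:

```python
# Toy sketch of claims-triage routing. The fraud and complexity scores
# stand in for outputs of a carrier's machine learning models; all
# thresholds and field names are illustrative assumptions.

def triage_claim(fraud_score: float, complexity_score: float, amount: float,
                 fast_track_limit: float = 5_000.0) -> str:
    """Route a claim into one of three queues based on model scores."""
    if fraud_score > 0.8:
        return "fraud_review"      # flag for potential fraud
    if complexity_score < 0.3 and amount <= fast_track_limit:
        return "fast_track"        # straightforward, settle quickly
    return "adjuster_review"       # large or complex: needs a human

print(triage_claim(fraud_score=0.1, complexity_score=0.2, amount=1_200.0))
# -> fast_track
```

In practice the interesting work is in the models producing the scores, not the routing itself, but the routing step is where the "fast-track, flag, or escalate" decision the article describes actually happens.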
Machine learning is also being applied to underwriting support. These tools don’t replace underwriters, Doddington emphasized, but provide recommendations on the next best action, which can help professionals prioritize tasks.
However, while machine learning lays the groundwork, generative AI – and particularly large language models (LLMs) – is rapidly gaining attention.
“Most people thinking about AI today really mean Gen AI,” she said. One of the most transformative areas, she noted, is the ability to process and analyze unstructured data – such as freeform text in claims reports or call center transcripts. Historically, this information was difficult to use at scale, but LLMs now make it possible to extract, organize, and analyze it across thousands of data points.
“You might have a claims handler who’s entered detailed notes – who hit whom, what happened, where,” Doddington said. “Now, AI can extract and structure that data for broader analysis, which can then feed into underwriting models or fraud detection.”
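The extraction step Doddington describes, turning a handler's freeform notes into structured fields that downstream underwriting or fraud models can consume, might be sketched like this. The prompt, field list, and sample reply are all illustrative assumptions, and the actual LLM call is left abstract since it depends on whichever model API a carrier uses:

```python
import json

# Hedged sketch: extracting structured fields from freeform claim notes.
# The prompt template and required fields are invented for illustration;
# the raw reply would come from whatever LLM the carrier has deployed.

EXTRACTION_PROMPT = """Extract the following fields from the claim note
and reply with JSON only: parties (list), incident_type, location, date.

Note: {note}"""

def parse_extraction(raw_reply: str) -> dict:
    """Validate the model's JSON reply before it feeds downstream models."""
    record = json.loads(raw_reply)
    required = {"parties", "incident_type", "location", "date"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"model reply missing fields: {missing}")
    return record

# An example reply an LLM might return for a collision note:
sample = ('{"parties": ["driver A", "driver B"], '
          '"incident_type": "rear-end collision", '
          '"location": "I-95 exit 4", "date": "2024-03-02"}')
record = parse_extraction(sample)
print(record["incident_type"])  # -> rear-end collision
```

The validation step matters: as the article notes later, LLM output cannot simply be trusted, so anything feeding an underwriting or fraud model should be checked against a schema first.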
She added that text summarization is another growing application, particularly in customer service environments like call centers, where AI can quickly distill the essence of long customer interactions.
Looking ahead, Doddington pointed to agentic AI as the next frontier – AI systems that can autonomously make and communicate decisions, acting like virtual agents. While still in its infancy in insurance, she said the industry is beginning to explore how these tools could be deployed in more routine decision-making environments.
“It’s early days, but we’re starting to see conversations around how agentic AI might automate parts of the process,” she said.
While optimism around artificial intelligence is high, Doddington cautioned that some expectations in the insurance space remain overly ambitious – particularly when it comes to full automation.
“There’s a lot of talk about AI replacing all jobs in insurance,” she said. “But that’s just not realistic.”
Doddington emphasized that insurance is a highly regulated industry where advice, accuracy, and human judgment are critical. While AI can assist with certain tasks, especially in data processing or routine workflows, she argued that it’s not yet reliable enough to operate independently – particularly in customer-facing roles.
“Would I have AI talking directly to my customers today? Probably not,” she said. “Hallucinations still happen far too often. You can’t afford that kind of risk in an industry where the quality of advice really matters.”
This is especially true in underwriting, claims, and customer communications – areas where misinformation or inconsistency could carry significant legal or reputational consequences. Doddington noted that AI hallucinations, or confidently incorrect outputs, remain a well-documented issue with large language models.
The hype, she said, stems from the misconception that AI systems can be left to operate entirely on their own. In reality, human oversight remains essential.
“You still need guardrails,” Doddington said. “You might have AI doing parts of the work, but you need humans reviewing the output, validating decisions, and making sure the technology is used responsibly.”
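The guardrail pattern Doddington describes, AI doing parts of the work with humans validating the output, can be sketched as a simple routing rule: anything customer-facing or low-confidence goes to a review queue rather than straight out the door. The threshold and the `Draft` structure are illustrative assumptions:

```python
from dataclasses import dataclass

# Sketch of a human-in-the-loop guardrail: AI-generated output is only
# auto-approved when it is internal and high-confidence. The threshold
# and fields are invented for illustration.

@dataclass
class Draft:
    text: str
    confidence: float     # model's calibrated confidence in its output
    customer_facing: bool

def route_output(draft: Draft, review_threshold: float = 0.95) -> str:
    if draft.customer_facing or draft.confidence < review_threshold:
        return "human_review"   # a person validates before anything goes out
    return "auto_approve"       # low-stakes internal output
```

Under this rule, nothing reaches a customer without a human in the loop, which mirrors Doddington's "would I have AI talking directly to my customers today? Probably not."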