Strategic Consulting & Industry Transformation

Insurance is failing AI, but why? 

Artificial intelligence is no longer a speculative frontier for insurers; it is already reshaping how risks are priced, claims are handled, and customers are served. Across the United States insurance market, however, many AI programs stall or quietly underperform not because the algorithms are weak, but because the operating model around them is misaligned with how AI truly creates value. 

Insurers commonly layer new tools onto legacy processes, bolt pilots onto fragmented data estates, or chase grandiose outcomes that their capabilities cannot deliver. The result is predictable: promising proofs of concept that never scale, mounting technical debt, and growing skepticism in the C-suite.

The industry’s emerging leaders are taking a different path. Global carriers, such as AXA and Zurich, are treating AI as an enterprise transformation lever, not a technology add-on, anchoring deployments in robust governance, standardized data, and operating models built for continuous learning and rapid iteration. They are redesigning workflows, decision rights, and incentives so that underwriters, claims handlers, and distribution teams can actually trust and use AI-driven insights at scale. In this model, governance is not a brake on innovation but a strategic part of it. 

For US carriers facing margin pressure, evolving regulation, and rising customer expectations, the implications are clear. Winning with AI will depend less on who has the flashiest model and more on who builds the most adaptive, AI-native operating model. This report explores what that shift requires in practice – and how insurers can realign governance, data, and culture so that AI investments translate into sustained, enterprise-wide impact.

 

Building a digital-native brain for insurance AI


AI is now at the core of how leading insurers price risk, but across much of the US market, AI performance still lags the hype. Failures commonly stem from misconceptions about the technology’s limits and from operating models misaligned with how AI actually works. Insurers invest in powerful models but surround them with legacy processes, siloed decision rights, and brittle governance. In that environment, AI becomes a flashy, often underperforming bolt-on tool instead of a “digital-native brain” at the heart of the enterprise.

Moder, which delivers digital solutions for the insurance industry, has extensive AI expertise and resources.

“In the insurance companies I know today, there is a ton of experimentation happening with AI, and nine out of 10 experiments are failing or failing to launch,” says Vikram Talwar, executive vice president and global business head of insurance.  “It can be that the AI looks very nice, it demos well, but [the companies] still need human beings to do the core job, and that’s a self-defeating situation.”

Having a digital-native brain requires designing an operating model in which insurers remain firmly in charge, setting strategy, defining guardrails, and deciding where human judgment is essential, while AI augments every major decision with speed, breadth of data, and pattern recognition that humans alone cannot match. To get there, carriers must stop treating AI as a series of isolated projects and start redesigning how work gets done.
 

“Tools are only as smart or as stupid as their users. AI, in my opinion, is a tool. It’s brilliant in the hands of brilliant people, and it’s totally stupid in the hands of others”
Vikram Talwar, Moder

 

Keeping control: AI as augmentation, not autopilot

 


Insurance is about assessing risk; it has always been an information and judgment business. The art is to let AI amplify that judgment. In practice, this means reengineering workflows so that underwriters, claims adjusters, and distribution teams receive AI-generated recommendations in context, with clear explanations, confidence levels, and escalation paths. Leading carriers are already doing this, using AI to pre-triage claims, surface risk signals, and suggest next-best actions while keeping humans as the final decision-makers in complex or sensitive cases.

This augmented model only works if executives treat AI as a strategic shift in how decisions are made. Studies of failed AI programs in insurance consistently find that initiatives flounder when they are delegated to IT or innovation labs without redesigning accountability, incentives, and governance across the business. In other words, AI does not wrest control from insurers; they give it away when they fail to adapt their operating models.

Emphasizing Moder’s approach, Talwar says, “We’ll run your operations, we’ll make you smarter and more intelligent, and we’ll give you line of sight to what’s going on, as the biggest thing most operating people in the world have an issue with is losing control.”

Strong data, strong process: the foundations of a transformative agenda 


A transformative AI agenda rests on two interconnected elements: robust data and reengineered processes. Industry experience shows that “data readiness” is not just about ingesting more datasets; it is about the readiness of workflows, core systems, and access controls. If an insurer cannot reliably pull clean, timely, high-trust data into an AI-enabled workflow, even the most sophisticated model will never scale beyond a pilot.

At the same time, process redesign is a strategic decision, not a technical one. Without rethinking how claims, underwriting, and service journeys flow end-to-end, AI becomes a narrow optimization tool that improves one step while the overall outcome remains unchanged. For example, a fraud model might accurately flag suspicious claims, but if investigation capacity, prioritization rules, and approval thresholds remain untouched, loss ratios barely move.

The carriers pulling ahead are embedding governance and process rigor into the heart of their AI strategy. For instance, Zurich’s AI Assessment Framework, launched in 2022 and built on OECD AI principles, is tightly integrated into its MLOps pipeline so that fairness, reliability, privacy, and accountability checks are automated rather than bolted on at the end. This kind of “compliance by design” enables the business to scale AI confidently, knowing that controls keep pace with experimentation. For US insurers facing intensifying regulatory scrutiny, a similar governance-first approach is essential to using AI ambitiously without losing control.

Think in modules, not moonshots 


Another recurring pattern in stalled AI programs is the urge to “do everything at once.” Insurers scatter their bets across dozens of pilots, such as chatbots and triage models, without a coherent architecture, while fixating on sweeping multiyear plans to change their business.

Discipline and focus are characteristics that Talwar stresses.

“Building a good strategy and a good transformation program is about challenging yourself to say no to 90 percent of things,” he says.

The alternative is a modular, puzzle-like strategy. Instead of a single moonshot, insurers define a target AI-enabled operating model for a priority domain, then build it piece by piece, ideally with three-month checkpoints, following a blueprint:

  • standardizing and governing the core data objects (customers, risks, policies, and losses)

  • redesigning a few critical workflows (submission triage, appetite checks, pricing recommendations) to be AI-assisted from day one

  • introducing decision support interfaces for underwriters, with transparent rationale and feedback loops into the models

  • automating the orchestration – how tasks, approvals, and exceptions flow across humans and systems

Each module becomes a self-contained success story characterized by measurable lift in loss ratio, expense savings, or cycle time. Those wins, in turn, fund and derisk the next pieces of the puzzle. This modular approach avoids “pilot purgatory,” builds organizational trust in AI, and creates reusable components – data pipelines, governance patterns, and UI patterns – that can be carried into adjacent lines and geographies.
 

“If you have a five-year transformation program, you don’t even need to tell me what the program is, I’ll tell you it’ll fail. Time kills everything”
Vikram Talwar, Moder

 

Agentic AI: unlocking the next wave of value


Generative AI has already changed how insurers handle content, communication, and basic knowledge work. The next leap, however, will come from agentic AI – systems that can perceive context, plan multistep tasks, take action across tools, and learn from outcomes with limited human intervention. Recent research describes how agentic AI in insurance moves beyond static prediction to “self-managed choices and goal-directed learning” across underwriting, risk modeling, fraud detection, and claims.

In practice, agentic AI operates like a digital team member. In life insurance underwriting, for instance, an AI agent can ingest application data and medical records, interpret underwriting guidelines, plan the evaluation path, fetch supplemental data, draft a risk decision, escalate edge cases, and learn from final outcomes to refine future decisions. Similar patterns are appearing in distribution, where agents can query multiple carrier APIs, compare coverage options, and prefill applications before a human broker or customer approves the final choice.

For American insurers, agentic AI is the key to unlocking AI’s full potential, but only if the operating model and company culture are prepared. Agentic systems interact with core platforms, trigger financial transactions, and influence customer outcomes in real time. That raises the stakes on governance, observability, and human-in-the-loop design. Therefore, carriers must clearly define:

  • which decisions agents may fully automate and which require human sign-off

  • how exceptions and edge cases are surfaced to experts

  • how accountability is assigned when agents act across organizational boundaries

When those guardrails are in place, agentic AI can become the engine of a truly digital-native brain by continuously sensing, deciding, and acting across the insurance value chain while humans steer strategy, ethics, and complex judgment.
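For readers who want a concrete picture, the decision-rights guardrails described above can be thought of as a simple routing policy that sits between an agent’s draft decision and any real-world action. The sketch below is a minimal, hypothetical illustration (the decision types, thresholds, and routing labels are invented for this example, not any carrier’s or Moder’s actual implementation):

```python
from dataclasses import dataclass

@dataclass
class DraftDecision:
    decision_type: str   # e.g. "claims_triage", "underwriting"
    confidence: float    # model confidence in [0, 1]
    amount_usd: float    # financial exposure of the decision

# Guardrails set by the carrier, not the AI: per decision type, the
# confidence and exposure limits within which an agent may act alone,
# plus decisions that must always carry human sign-off.
GUARDRAILS = {
    "claims_triage": {"min_confidence": 0.90, "max_amount_usd": 5_000, "always_human": False},
    "underwriting":  {"min_confidence": 0.95, "max_amount_usd": 50_000, "always_human": True},
}

def route(decision: DraftDecision) -> str:
    """Return 'automate', 'human_signoff', or 'escalate' for a draft decision."""
    rule = GUARDRAILS.get(decision.decision_type)
    if rule is None:
        # Unknown decision types are edge cases: surface them to experts.
        return "escalate"
    if rule["always_human"]:
        return "human_signoff"
    if (decision.confidence >= rule["min_confidence"]
            and decision.amount_usd <= rule["max_amount_usd"]):
        return "automate"
    return "human_signoff"
```

Under this sketch, routine low-value triage with high model confidence is automated, underwriting decisions always go to a human, and anything the guardrail table does not cover is escalated rather than acted on – the “humans steer strategy, ethics, and complex judgment” posture in code form.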

From misalignment to momentum


The core message for US insurers is simple: AI underperforms not because the technology is immature, but because many organizations do not know how to scale it and build it into their business. Operating model misalignment – fragmented data, static processes, siloed governance – is what turns promising proofs of concept into stranded assets.

The path forward is to build a digital-native brain that keeps insurers firmly in control:

  • 🤝 Treat AI as augmentation, not autopilot, with humans owning the hardest calls.

  • 🧱 Anchor every initiative in strong data foundations, redesigned processes, and embedded governance.

  • 🧩 Pursue a modular, puzzle-piece strategy that proves value in one domain, then scales.

  • 🤖 Prepare now for agentic AI by defining the roles, guardrails, and interfaces that enable autonomous systems to operate safely at the heart of the business.

Insurers that make this shift will not just deploy more sophisticated models; they will rewire how the enterprise thinks and acts. In doing so, they will finally align their operating models with how AI truly works and convert today’s experiments into tomorrow’s enduring competitive advantage. 
