US consumers are using artificial intelligence tools more than ever and are increasingly open to insurers deploying the technology – but remain wary of AI making key coverage and claims decisions, according to Insurity’s 2026 AI in Insurance Report.
The survey of more than 1,000 US adults, conducted online in February 2026, pointed to a marked shift from the skepticism seen a year earlier. While overall trust in AI‑driven insurance decisions is still limited, the findings suggest policyholders are moving beyond novelty and are now judging AI on how, rather than whether, it is used.
According to the report, 84% of consumers now use AI tools at least occasionally, and 27% said they use AI daily. As generative tools become embedded in writing, workplace productivity, health queries and financial comparisons, AI is “no longer viewed as experimental technology but as part of how consumers make decisions and manage everyday risk,” the firm said.
That familiarity is feeding through to insurance. In 2026, 39% of respondents said it is a good idea for their insurer to use AI to improve services – nearly double the 20% who expressed support in 2025, when Insurity’s prior survey highlighted a deterioration in sentiment. Last year, only one in five Americans thought it was a good idea for P&C carriers to leverage AI, down from 29% in 2024, and 44% said they would be less likely to buy from an insurer that publicly used AI.
Resistance is easing but has not disappeared. In 2026, the share of consumers who said they would be less likely to purchase a policy from an insurer that publicly uses AI declined to 36% – still more than a third of the market.
“Consumers have moved past the hype cycle,” said Jatin Atre, president at Insurity. “They are not impressed by the fact that insurers are using AI. They care about how it is being used.”
The report drew a clear distinction between AI as a supporting tool and AI as an autonomous decision‑maker.
Consumers showed more comfort with AI handling routine interactions in P&C: 46% said they would allow AI to generate a quote, 39% are comfortable with AI tracking claim status, and 38% would use AI to update personal information.
Only 22% said they would feel comfortable with AI filing a claim on their behalf, and just 16% are comfortable with AI canceling or renewing a policy. Nearly half of respondents expressed distrust when AI is described as making determinations on claim approvals, fraud flags or policy adjustments.
Only about one-third said they trust AI-driven insurance decisions, while 26% said they need more information before forming an opinion.
Atre warned that if AI is deployed “simply to cut costs or automate decisions without explanation, trust will erode,” but that it can build confidence when used to make underwriting smarter, claims faster and interactions clearer, with visible human oversight.
The shift in sentiment comes as insurers accelerate AI investment and regulators sharpen their focus on how the technology is deployed.
A recent Accenture survey of global insurance executives found that 90% plan to increase AI spending in 2026, with many reporting early gains in combined ratios and claims efficiency from broader deployments.
At the same time, the National Association of Insurance Commissioners (NAIC) has been expanding its oversight toolkit. Its Big Data and Artificial Intelligence Working Group is piloting an AI Systems Evaluation Tool for use in market conduct and financial exams, aimed at assessing governance, risk mitigation and high‑risk models across underwriting, pricing and claims.
Consumer concerns about opaque or biased algorithms remain a core regulatory theme. The NAIC’s AI principles, adopted in 2020 and now echoed in several state bulletins, emphasize fairness, accountability and transparency, while state legislatures are beginning to consider AI‑related consumer‑protection measures that could affect insurance.
Insurity’s findings reinforce a familiar tension: insurers face pressure to automate more of the value chain to reduce expense ratios and cycle times, even as consumers remain reluctant to let algorithms make consequential coverage and claims decisions.
The survey suggests that front‑end, low‑stakes touchpoints remain the most promising areas for customer‑facing AI. Those use cases align with separate Insurity research from 2025 showing that only 15% of consumers want a fully digital, self‑service insurance experience, while nearly half prefer a “digital‑first” model with easy access to human support when needed.
By contrast, positioning AI as the primary decision‑maker on claim payments, fraud determinations or renewals risks eroding trust if not accompanied by clear explanations and appeal paths. With almost half of respondents expressing discomfort in those scenarios, insurers may see complaints and regulatory scrutiny if customers feel adverse outcomes are being driven by algorithms they do not understand.
Intermediaries may increasingly be asked to explain how particular carriers use AI in underwriting and claims, whether humans remain in the loop for hard calls, and how customers can challenge automated decisions. Those that can translate technical practices into plain‑language assurances may be better positioned to retain and attract clients as AI adoption grows.
More broadly, AI in insurance is moving from pilot projects into core operating infrastructure. Industry analyses suggest the global AI in insurance market could reach tens of billions of dollars in annual value by the end of the decade, with early adopters reporting measurable gains in loss ratios, claims cycle times and fraud detection.
Insurity’s 2026 data indicates that consumer attitudes, which dipped in 2025 as AI hype peaked and concerns about fairness and job displacement grew, are beginning to normalize as everyday usage rises and more concrete use cases emerge.