New Zealand’s insurance dispute resolution body is urging consumers to fact-check AI-generated advice before relying on it in complaints against insurers, warning that inaccurate or misleading outputs are creating unrealistic expectations and complicating the complaints process.
The Insurance & Financial Services Ombudsman Scheme said growing numbers of consumers are turning to AI tools to help draft insurance complaints or understand likely dispute outcomes, but some responses generated by those platforms have included incorrect legal interpretations, fabricated case references and misleading statistics about complaint success rates.
Karen Stevens, Insurance & Financial Services Ombudsman, said the trend reflects broader adoption of AI across financial services but warned that unverified outputs can create confusion for complainants and friction in dispute resolution.
In one example cited by the IFSO Scheme, a Google AI-generated summary reportedly suggested insurance claim decisions are “frequently overturned” when consumers complain and that “up to 80-90% of cases can result in success if people persist” - assertions Stevens said are misleading.
“Many people come to us hoping we can put things right, but if the law or the contract does not support their position, we cannot change that, even where the outcome feels unfair,” she said.
The warning comes as AI adoption accelerates across the insurance sector globally, with carriers, brokers and claims teams increasingly deploying generative AI for customer service, underwriting and administrative support. However, regulators and ombudsman bodies have repeatedly cautioned that AI tools can oversimplify nuanced policy language, misinterpret jurisdiction-specific rules and produce so-called “hallucinated” content.
The IFSO’s comments highlight a growing operational challenge: customers arriving at complaints processes with AI-generated submissions that may be lengthy, legally over-engineered or based on flawed assumptions.
Stevens said some AI-assisted complaints received by the scheme have run to hundreds of pages, often obscuring the underlying issue rather than clarifying it.
“We have seen complaints which are 300 pages long. But more words are not necessarily better. Clear information about what has gone wrong is much more useful than multiple pages referencing legislation and case law,” she said.
The ombudsman also warned consumers to be cautious about uploading sensitive personal or financial information into public AI platforms, particularly free tools where data storage and reuse practices may be unclear.
The IFSO Scheme said consumers using AI to assist with complaints should verify outputs against trusted New Zealand-based sources, review their policy wording directly, and ensure any AI-drafted submission accurately reflects the core issue in dispute.
While AI can provide useful support, Stevens said, consumers should treat it as an aid - not a substitute for policy wording, legal context or professional advice.