Nippon Life Insurance Company of America has filed a federal lawsuit against OpenAI, alleging that its ChatGPT platform engaged in the unauthorized practice of law and interfered with a settled long-term disability claim involving an Illinois policyholder.
The complaint, filed March 4 in the US District Court for the Northern District of Illinois, names OpenAI Foundation and OpenAI Group PBC. Nippon Life alleges that ChatGPT encouraged claimant Graciela Dela Torre to seek to reopen a lawsuit that had been dismissed with prejudice under a January 2024 settlement agreement. “As Dela Torre’s legal assistant and adviser, OpenAI intentionally induced and facilitated Dela Torre’s breach of a valid and enforceable settlement agreement with Nippon by encouraging and assisting her in filing a motion to reopen a lawsuit that had been dismissed with prejudice. It also aided and abetted her abuse of the judicial process,” the insurer said in its complaint, as reported by AM Best.
The filing asserts that ChatGPT drafted legal-style arguments and court documents for Dela Torre and that the system has a pattern of “hallucinating,” or generating non-existent legal citations. Nippon Life said it has devoted “significant time and resources,” including legal fees and other costs, to respond to multiple filings connected to the dispute. According to the complaint, Dela Torre was insured under a group long-term disability policy through her employer and filed a claim on July 25, 2019, citing carpal tunnel syndrome and epicondylitis. Nippon Life approved benefits on Aug. 2, 2019, and later terminated them on Nov. 30, 2021, after determining she no longer met the policy definition of disability.
Dela Torre sued in December 2022, and the parties settled in January 2024, with Dela Torre agreeing to release Nippon Life from further legal action related to the claim. In early 2025, after her attorney advised that additional litigation would be unlikely to succeed, she turned to ChatGPT for input, the insurer said. The complaint states that “ChatGPT analysed the response and determined that Mr. Probst’s response invalidated Dela Torre’s feelings, dismissed her perspective, and deflected responsibility for her dissatisfaction. ChatGPT ultimately concluded that the tactics used in Mr. Probst’s response constituted gaslighting and were aimed at emotionally manipulating Dela Torre.”
Following that exchange, Dela Torre dismissed her legal counsel and relied on ChatGPT to prepare subsequent filings against Nippon Life and other defendants. Court records cited in the complaint indicate she has submitted numerous motions and requests for judicial notice.

Nippon Life Insurance Co. of America holds a Best's Financial Strength Rating of A- (Excellent). The dispute raises questions about the role of consumer-facing generative AI tools in contentious claims, particularly when those tools are used to interpret legal correspondence or generate litigation documents without formal legal representation.
The Nippon Life lawsuit arises alongside separate US litigation examining AI-driven decision-making in health insurance. In a class action against UnitedHealth Group Inc., a federal judge has allowed broad discovery into the company’s use of its nH Predict AI system for Medicare Advantage post-acute care claims. Plaintiffs allege that UnitedHealth relied on the AI platform, without human input, to deny or shorten coverage, and assert breach of contract and unfair trade practices.
According to the allegations, AI-based determinations contributed to earlier-than-anticipated discharge decisions and deterioration in some patients’ conditions, including cases resulting in death. The proceedings are being followed by claims, legal, and compliance teams assessing the level of human oversight required when AI tools are used in benefit-duration decisions. These US cases highlight potential legal and operational issues around model governance, documentation, and escalation to human review.
Amid growing legal scrutiny, new research suggests that while consumers see clear potential advantages in AI use by insurers, their confidence in large-scale deployment remains mixed. GlobalData’s 2024 Emerging Trends Insurance Consumer Survey found that 73.8% of respondents believe AI can reduce waiting times to speak with insurance agents. A slightly smaller share, 71.5%, see potential for improvements in operational efficiency, and 71.2% regard AI as better than humans at pattern recognition.
However, the findings indicate that positive expectations about performance do not necessarily lead to strong support for extensive adoption. “Despite the positive perceptions, insurers face challenges in ensuring consumers adopt AI tools. Many consumers find that the technology is not yet sufficiently developed to be adopted at scale, eroding their trust. To overcome these trust issues, insurers must prioritize transparency in AI-driven decisions, particularly among those who perceive bias in the tools, such as providing negative claim outcomes. Some consumers will have data privacy concerns, while others will simply just prefer interacting with a human,” Beatriz Benito, lead insurance analyst at GlobalData, said.
Among users of AI-based tools, reported satisfaction is relatively high. The survey found that 74.5% of customers using insurance chatbots were either satisfied or very satisfied with their experience.

Benito said AI is expected to reshape multiple parts of the insurance value chain. "Most certainly, the use of AI will transform the insurance industry in several ways and will also drive operational efficiencies and cost reductions. For instance, the availability of AI tools brings a new paradigm in that assistance or customer support can be provided 24/7, while the automation of claims processing leading to reduced settlement times, will naturally be viewed favourably by consumers," Benito said. She added that AI's pattern-recognition capabilities can support more precise risk assessment, pricing, and fraud detection. At the same time, "the need for the human touch and empathy in engagements continue to limit its full potential," she said, adding that "better communication surrounding AI's capabilities and nuances will ultimately lead to improved adoption rates."