Nippon Life sues OpenAI over ChatGPT's role in reopening settled disability case

Nippon Life Insurance sued OpenAI on March 4, 2026, claiming ChatGPT helped a former claimant file dozens of meritless court documents after her case had settled. The suit seeks $10.3 million in damages and an injunction blocking OpenAI from giving legal advice in Illinois.

Categorized in: AI News, Legal
Published on: Mar 18, 2026

OpenAI Faces Lawsuit Over ChatGPT's Role in Generating Meritless Legal Filings

Nippon Life Insurance Company of America sued OpenAI on March 4, 2026, claiming the company's ChatGPT tool engaged in tortious interference with a contract, abuse of process, and unauthorized practice of law. The case centers on how a former claimant used the chatbot to generate dozens of court filings against the insurer after her settlement was already final.

The lawsuit is the first federal case to directly test whether AI systems can be held liable for practicing law without a license, a question that could reshape how courts treat AI-generated legal work across industries.

How the Case Unfolded

Graciela Dela Torre filed a long-term disability claim with her employer in 2021. Nippon Life, the insurer, terminated her benefits in November 2021. She sued Nippon Life in December 2022 and settled the case in January 2024, signing a release that dismissed her claims with prejudice.

A year after the settlement, Dela Torre uploaded her correspondence with her former attorney to ChatGPT and asked whether she was being gaslighted. The chatbot responded affirmatively, telling her that her attorney's communications "invalidated" her feelings and "dismissed her perspective."

Dela Torre fired her attorneys and turned to ChatGPT as her legal advisor. She used the chatbot to draft a motion to reopen her case, which the court denied on February 13, 2025. The next day, she filed a new lawsuit against another insurer, later amending it to add Nippon Life as a defendant.

Across both proceedings, Dela Torre filed 21 motions, one subpoena, and eight notices and statements, all created with ChatGPT's assistance. Nippon Life attributes at least 44 total filings to the chatbot. One motion cited a fabricated case called "Carr v. Gateway, Inc." that exists nowhere except in Dela Torre's papers and ChatGPT's output.

Three Legal Claims Against OpenAI

Tortious Interference with Contract. Nippon Life alleges that OpenAI intentionally undermined the settlement agreement by encouraging Dela Torre to breach it and pursue claims that were already dismissed. The complaint argues that ChatGPT actively sabotaged an enforceable contract by telling Dela Torre her attorney was wrong.

Abuse of Process. The company contends that generating dozens of meritless court filings constitutes abuse of the judicial system. This claim does not require proving ChatGPT practiced law, only that OpenAI's system foreseeably produced worthless filings that harmed a third party.

Unauthorized Practice of Law. This is the most novel claim. Nippon Life argues that OpenAI violated Illinois law by providing legal advice through ChatGPT. The complaint notes that despite OpenAI marketing ChatGPT's performance on the Uniform Bar Examination, the system is not admitted to practice in any U.S. jurisdiction.

What Nippon Life Seeks

  • $300,000 in compensatory damages for attorney fees and litigation costs
  • $10 million in punitive damages
  • A declaration that OpenAI violated Illinois unauthorized practice of law statutes
  • An injunction barring OpenAI from providing legal advice in Illinois

OpenAI's Marketing and Policy Decisions

Nippon Life uses OpenAI's own actions as evidence against it. In October 2024, OpenAI updated its usage policies to prohibit relying on ChatGPT for legal advice. The complaint treats this not as a defense but as proof that OpenAI recognized the foreseeable risk and responded with a terms-of-service disclaimer rather than building safeguards into the system itself.

The complaint also highlights OpenAI's marketing of ChatGPT's bar exam performance. Nippon Life frames this not as evidence of competence but as a capability claim that invited reliance, without the professional structure that would make that reliance safe.

Why This Case Matters

The lawsuit raises a core question: When does an AI system's output cross from providing general information to functioning as a licensed professional service? Unauthorized practice of law rules exist to protect the public from incompetent non-lawyers. The question before the Northern District of Illinois is whether ChatGPT crossed that line when it told Dela Torre her attorney was wrong about a binding contract.

Dela Torre clearly trusted ChatGPT's legal conclusions. She acted on them, fired her lawyers, and filed court documents based on the chatbot's advice. The system mimicked an attorney convincingly enough to be taken as one, without any professional boundaries or design constraints that would have prevented harm from incompetent representation.

OpenAI told Law360 that the complaint "lacks any merit whatsoever." As of this writing, no counsel has entered an appearance for the defendants.

The answer the Northern District of Illinois reaches will affect every industry where AI tools interact with professional licensing frameworks and regulatory requirements.
