California Judge Fines Lawyers $31,000 for Filing AI-Generated Fake Legal Research

A California judge fined two law firms $31,000 for submitting AI-generated fake legal citations in briefs. He warned against relying on AI without thorough verification.

Published on: May 14, 2025

Judge Criticizes Lawyers for Using AI-Generated Fake Legal Research

A California judge has issued a strong rebuke to two law firms after discovering that their supplemental brief contained multiple false and misleading legal citations generated by AI. Judge Michael Wilner imposed $31,000 in sanctions, emphasizing that “no reasonably competent attorney” should rely on AI for legal research and writing without thorough verification.

False Citations Nearly Influenced Judicial Decisions

Judge Wilner explained that he initially found the authorities cited in the brief convincing and sought to learn more. When he checked those sources, however, he discovered that several did not exist. This raised the concern that fabricated, AI-generated material could have made its way into judicial orders had it gone unchecked.

The issue began when a plaintiff's lawyer used AI tools to draft an outline for a supplemental brief in a civil lawsuit against State Farm. The outline, which contained fabricated research, was passed to another firm, K&L Gates, which incorporated it into its filing. Alarmingly, no one at either firm verified the accuracy of the citations before submission.

Deeper Investigation Reveals More Fabrications

After Judge Wilner questioned K&L Gates, the firm resubmitted the brief. Unfortunately, the revised document contained even more fake citations and quotations. This led the judge to issue an Order to Show Cause requiring sworn statements from the lawyers involved. The lawyer who drafted the original outline admitted to using AI tools including Google Gemini and Westlaw Precision with CoCounsel.

AI Misuse in Legal Documents Is Not New

This case is part of a growing trend of lawyers caught submitting AI-generated legal research that is inaccurate or entirely fabricated. Michael Cohen, former lawyer for Donald Trump, mistakenly passed along AI-generated citations to cases that did not exist. In another instance, lawyers suing a Colombian airline included fictitious cases produced by ChatGPT in a court filing.

Judge Wilner stressed the ethical and professional risks, stating that undisclosed AI use in drafting legal briefs is “flat-out wrong.” Sharing AI-generated drafts without disclosure puts other attorneys at risk and threatens the integrity of the legal process.

What Writers Can Learn from This

  • Always verify AI-generated content. Whether drafting legal briefs or any professional writing, double-check facts, references, and citations.
  • Disclose the use of AI tools when appropriate. Transparency builds trust and avoids ethical pitfalls.
  • Understand AI tools have limitations. They can assist creativity and efficiency but are prone to errors that require human oversight.

For writers exploring AI-assisted research or content creation, developing strong fact-checking habits is essential. If you want to improve your skills in responsibly using AI tools, consider exploring practical AI courses that emphasize accuracy and ethical use.

