AI Hallucinations in Court: How Fabricated Citations Are Undermining Legal Proceedings

Judges are frustrated with AI-generated legal filings containing fabricated quotes and false citations. Courts warn lawyers to verify every AI-produced reference or face sanctions.

Published on: May 26, 2025

Judges Push Back Against AI-Generated Errors in Legal Filings

Judges are increasingly frustrated with AI-generated legal filings containing fabricated quotes, incorrect case references, and citations to nonexistent precedents. These AI "hallucinations" are raising serious concerns about the reliability of court documents.

A recent high-profile case illustrates the problem. A major law firm representing Anthropic, an AI company, filed expert testimony containing multiple errors introduced by Anthropic's own chatbot, Claude: wrong paper titles, incorrect authorship, and garbled wording. The errors appeared in an April court filing, prompting the plaintiffs, music publishers suing Anthropic for copyright infringement, to ask the federal magistrate to strike the expert's entire testimony.

Latham & Watkins, the law firm involved, argued these mistakes were minor citation errors rather than fabrications. They called the oversight “embarrassing and unintentional” but insisted it should not discredit the expert's opinion. Magistrate Judge Susan van Keulen expressed skepticism during a May hearing, noting the significant difference between a simple citation error and AI hallucinations.

Widespread Issue Across Jurisdictions

This challenge is not isolated. Damien Charlotin, a French lawyer and data expert, has tracked 99 cases across multiple countries where AI-generated errors appeared in court filings. The actual number is likely much higher since many errors go unnoticed.

Nearly half of these cases involve pro se litigants, who often receive leniency due to inexperience. However, many involve lawyers, some filing erroneous documents well after AI’s tendency to hallucinate was widely recognized. This suggests the problem is worsening.

Legal Professionals Must Verify AI Outputs

UCLA law professor Eugene Volokh stresses that every citation generated by AI must be verified. Courts are clear that submitting filings with unverified factual assertions, including citations, violates Rule 11 of the Federal Rules of Civil Procedure. This exposes lawyers to sanctions and disciplinary actions.

Some courts now require disclosure of AI use in document preparation and certification that all references have been checked. One federal district has even banned most AI use in filings.
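Part of that verification workflow can be mechanized: extract every citation-shaped string from a draft so that none slips through unchecked, then look each one up by hand in an official reporter or database. The sketch below is illustrative only, using a deliberately simplified regular expression for U.S. reporter citations (real Bluebook citation formats are far more varied), and the function name is hypothetical rather than any real tool.

```python
import re

# Deliberately simplified pattern for U.S. reporter citations such as
# "678 F. Supp. 3d 443" or "598 U.S. 617"; real citation formats
# (Bluebook) are far richer, so this is a sketch, not a full parser.
CITATION_RE = re.compile(
    r"\b(\d{1,4})\s+"                                     # volume number
    r"(U\.S\.|S\. Ct\.|F\.\d?d?|F\. Supp\.(?: \dd)?)\s+"  # reporter name
    r"(\d{1,4})\b"                                        # first page
)

def extract_citations(text: str) -> list[str]:
    """Return every reporter-style citation found, for manual checking."""
    return [" ".join(m.groups()) for m in CITATION_RE.finditer(text)]

draft = "See Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)."
for cite in extract_citations(draft):
    # Flag each citation; a human must still confirm it exists as cited.
    print(f"VERIFY by hand: {cite}")
```

A script like this only builds the checklist; it cannot tell a real case from a hallucinated one, which is exactly why the human lookup step remains mandatory.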

The Core Problem: AI Cannot Be Trusted to Generate Accurate Information

AI systems often invent details when they lack reliable information, a risk that extends well beyond the legal field. Stanford researchers found that even advanced AI bots fail to provide verifiable sources for medical claims about 30% of the time, creating the potential for real harm.

Lawyers, responsible for high-stakes disputes, must be especially diligent. While AI has a legitimate role in law, professionals cannot ignore the pitfalls of unmonitored AI outputs.

Sanctions and Lessons from Early Cases

  • Mata v. Avianca (June 2023): Two lawyers were fined $5,000 for submitting a ChatGPT-generated brief citing nine fake court decisions. Despite widespread publicity, reliance on AI-generated content remains a problem.
  • Keith Ellison Case: A Stanford professor providing expert testimony on AI misinformation included fabricated citations from ChatGPT. The judge struck the entire declaration, highlighting how even AI experts can fall prey to AI errors.
  • State Farm Insurance Case: A California federal magistrate fined two law firms $31,000 for a brief containing numerous false citations created by an AI-assisted outline. The judge warned of the danger posed by accepting AI-generated citations without verification.

Why Errors Persist

AI-generated text mimics the structure and style of authentic legal citations, making fake references look legitimate. Overworked lawyers may accept these at face value. Early sanctions, often limited to small fines, may not have deterred careless reliance on AI. But the reputational damage and loss of credibility in court are severe consequences for those involved.

One lawyer involved in the Avianca case admitted he wrongly believed AI tools could not fabricate cases and, troublingly, sought confirmation of questionable citations from the AI itself.

What This Means for Legal Professionals

AI can assist with legal work but must be treated as a tool, not an authority. Every citation and factual assertion requires human verification before filing. Courts are signaling that tolerance for careless AI use is running out.

For legal professionals interested in improving their knowledge and skills in AI and its impact on law, exploring specialized training courses can be a wise step. Resources like Complete AI Training’s legal-focused courses offer valuable insights into safely integrating AI into legal workflows.

Ultimately, lawyers must remember that reliability and accuracy remain paramount. AI errors can lead to sanctions, damage to reputation, and loss of client trust. The lesson is clear: verify everything.