Short Circuit Court: AI Hallucinations in Legal Filings and How to Avoid Making Headlines
At a recent dinner with transactional lawyers, the topic of AI’s impact on legal work came up. They praised AI tools like ChatGPT and Google's Gemini for automating contract drafting and summarizing documents, freeing lawyers to focus on higher-value analysis. But when it came to litigation, their tone shifted. “How do you keep letting ChatGPT draft briefs with fake case citations?” one asked. “How many more sanctions are needed? Don’t you check your work?”
This frustration is echoed in the news. Experienced litigators regularly face sanctions for submitting court filings containing AI-generated falsehoods or “hallucinated” content. For example:
- In January 2025, a federal judge in Kohls v. Ellison excluded expert testimony on AI misinformation because it cited fabricated, ChatGPT-generated articles. The court remarked on “the irony” of fake AI citations appearing in a declaration about AI misinformation.
- In April 2025, attorneys for Mike Lindell tried to avoid sanctions for an error-filled, AI-generated brief in a defamation case but were unsuccessful. The court imposed a monetary sanction.
- In June 2025, a Georgia appellate court vacated a trial order that cited two fictitious AI-generated cases and noted additional “hallucinated” cases cited on appeal.
With courts now issuing orders based on AI-hallucinated cases, the problem is urgent. This article explains why AI-generated false information keeps appearing in filings and offers practical steps for litigators to avoid these costly mistakes.
What Are AI “Hallucinations”?
An AI “hallucination” is fabricated content produced by large language models like ChatGPT, Claude, or Gemini when responding to prompts. These hallucinations may appear as fake cases, misleading quotes, distorted interpretations of law, or completely invented legal principles.
AI chatbots generate text by predicting the most probable sequence of words based on patterns in their training data. They do not verify facts or guarantee accuracy. This means their outputs can look plausible yet be entirely false.
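To make that concrete, the sketch below (in Python, using the small open GPT-2 model through Hugging Face’s transformers library, chosen here purely for illustration) shows what a language model actually computes: a score for each possible next token. Nothing in that computation opens a case reporter, checks a docket, or confirms that a citation exists.

```python
# Minimal sketch of next-token prediction, the mechanism described above.
# GPT-2 via Hugging Face transformers is used only as an open, runnable
# illustration; commercial chatbots are larger but work the same way.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The leading case on this point is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocabulary)

# Scores for every candidate next token after the prompt. The model only
# ranks what text is statistically likely to follow; it does not look
# anything up or verify that the continuation is true.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode(int(token_id))), float(score))
```

Commercial chatbots are far larger and more polished, but the underlying step is the same kind of prediction, which is why a confident-sounding citation can be entirely invented.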
The impact on court filings is growing fast. Since the first widely reported AI hallucination case in June 2023, more than 150 cases involving hallucinated material in court filings have been documented, two-thirds of them in just the last six months. The problem is accelerating as AI tools become more common but remain widely misunderstood.
AI Is a Tool, Not a Source
Litigators must remember that generative AI is a tool, not an independent source of legal authority. A source provides verified, reliable information—like a case opinion or statute. A tool helps access or analyze those sources.
Many attorneys mistakenly believe AI must be retrieving real cases from somewhere. But AI models do not search databases or cite genuine documents; they generate text based on learned language patterns. This misunderstanding leads to blind trust in AI outputs.
Generative AI also tends to confirm what users want to hear, acting as an uncritical “yes man.” It mixes genuine legal authority with fabricated citations, making it easy to miss falsehoods when reviewing.
Under tight deadlines and budget pressures, AI seems like a convenient one-stop solution for drafting arguments and citations. This convenience, combined with misunderstanding AI’s nature, creates a perfect storm for hallucinations to slip into filings.
How to Avoid Making Headlines for AI Hallucinations
Successful use of AI in litigation depends on pairing artificial intelligence with human judgment and verification. Here are key strategies to safely incorporate AI tools:
- Accept that AI is not a source or authority. Use AI only as a drafting aid and research assistant, and independently verify any authority or fact it produces.
- Review court rules on AI disclosure. Many courts require attorneys to disclose if AI assisted in drafting. Be transparent, and consider avoiding AI for filings in jurisdictions with strict policies.
- Adopt a healthy skepticism. Don’t accept AI-generated content at face value—especially when it supports your argument perfectly. Question and cross-check outputs rigorously.
- Make verification routine and allow time for it. Verify every case, quotation, and authority, ask AI tools to provide copies of their sources, and build verification steps into your workflow (a minimal automated first pass is sketched after this list).
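Part of that routine can be automated as a first pass. The sketch below assumes a citation-lookup web service (CourtListener publishes one; the URL, authentication scheme, and response fields shown here are illustrative assumptions to be confirmed against the provider’s current documentation). It flags citations the service cannot match; it is a screen, not a substitute for pulling and reading each opinion.

```python
# Minimal sketch of an automated first-pass citation check.
# Assumptions: a citation-lookup REST endpoint exists (CourtListener offers
# one); the URL, token header, and response fields below are illustrative
# and should be confirmed against the provider's documentation.
import requests

LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"  # assumed endpoint
API_TOKEN = "..."  # most such services require an API token

def flag_unverified_citations(brief_text: str) -> list[str]:
    """Post the draft's text to the lookup service and return citations it cannot match."""
    resp = requests.post(
        LOOKUP_URL,
        data={"text": brief_text},
        headers={"Authorization": f"Token {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape: one record per citation found in the text,
    # with any matched opinions listed under "clusters".
    return [r["citation"] for r in resp.json() if not r.get("clusters")]

if __name__ == "__main__":
    with open("draft_brief.txt") as f:
        for cite in flag_unverified_citations(f.read()):
            print("Could not verify:", cite)
```

Even when every citation matches a real case, the quotations and propositions attributed to it still need to be checked against the opinion itself.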
These practices ensure AI’s efficiency benefits are realized without sacrificing accuracy or risking sanctions. As Indiana Magistrate Judge Mark Dinsmore noted in sanctioning a lawyer for filing AI-generated fake cases, AI is “much like a chain saw or other useful but potentially dangerous tool,” requiring “caution” and “actual intelligence” in its use.
By recognizing AI’s limitations and treating it strictly as a tool subject to human oversight, litigators can avoid embarrassing headlines and protect their clients and reputations.
For legal professionals looking to deepen their knowledge of AI’s role and safe use in law practice, courses and resources are available at Complete AI Training.