AI Slop in Court Filings: How to Stop Hallucinated Citations Before They Cost You
Courts are confronting a simple but dangerous problem: briefs with citations to cases that do not exist. The source is familiar: generative AI tools that "hallucinate" authority. The fallout is real: sanctions, fines, retractions, and public embarrassment.
Reports across major outlets in 2025 show the trend hasn't cooled despite warnings and standing orders. Judges are calling it out. Lawyers are getting caught. And a growing number of "vigilante" attorneys are exposing bad filings and forcing corrections.
What's Actually Happening
Generative AI tools like ChatGPT and Gemini can draft quickly and suggest citations. They can also fabricate them with convincing detail. That mix of speed and false confidence has seeded filings with fake authority.
In one high-profile early case, Mata v. Avianca, a New York lawyer submitted a brief with invented citations and was sanctioned. The opinion became required reading at many firms.
Read the sanction order in Mata v. Avianca on CourtListener
Why It Keeps Happening
- AI tools predict plausible text; they don't guarantee truth. Hallucinations are a built-in byproduct of how these models work, not a rare glitch.
- Time pressure tempts shortcuts. A fake case can slip through a busy workflow.
- Confusion about disclosure. Some lawyers still treat AI like a private research assistant.
- Weak verification protocols. If the "source of truth" is the draft itself, errors propagate.
How Judges Are Responding
Federal judges have issued standing orders on AI use, and some have acknowledged AI's role in flawed rulings and tightened their own processes. Sanctions have included fines, forced retractions, and published rebukes. Appellate briefs have been flagged for citing nonexistent cases, sometimes blamed on clients using AI.
Coverage in major publications and legal newsletters shows a steady 2025 drumbeat: more exposure, more penalties, and louder calls for disclosure and verification.
Global Echoes and Tracking
This isn't just a U.S. issue. Australian academics and courts have warned about fake AI-generated authorities and urged safeguards. A public database maintained by legal researcher Damien Charlotin logged 100+ hallucination incidents in filings across multiple countries, with a surge in 2025.
The takeaway is clear: the risk is systemic, cross-border, and still growing.
Practical Safeguards You Can Deploy Now
Tools won't save you; process will. Adopt a verification-first workflow that treats AI output as a draft, never as a source.
Zero-Trust Research Protocol
- Prohibit "AI-only" citations. Every citation must be confirmed in a primary source or trusted database (Westlaw, Lexis, CourtListener, official reporters); a minimal screening sketch follows this list.
- Require a "source of record" link or citation for every authority in the draft. No exceptions.
- Spot-check quotations against the official text and confirm procedural posture, jurisdiction, and subsequent history.
- Document verification in a brief-specific checklist attached to the file (matter DMS note, not for filing).
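If you want to automate the first pass of that protocol, here is a minimal sketch in Python. It assumes CourtListener's REST search endpoint lives at /api/rest/v4/search/, accepts a q parameter, returns a JSON count field, and reads an API token from a COURTLISTENER_TOKEN environment variable; treat all of those, and the draft_brief.txt filename, as assumptions to confirm against the current API documentation. A "no match" result is not proof a case is fake, only a flag for manual checking in Westlaw, Lexis, or the official reporter.

```python
# Minimal sketch: screen a draft's citations against a legal database before filing.
# Endpoint path, auth scheme, and response fields are ASSUMPTIONS; verify against
# CourtListener's current API docs before relying on this.
import os
import re
import requests

SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"  # assumed endpoint
CITATION_RE = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.\s]{1,20}?\s+\d{1,5}\b")  # rough volume-reporter-page pattern


def extract_citations(draft_text: str) -> list[str]:
    """Pull candidate citation strings (e.g. '573 F. Supp. 3d 1023') from a draft."""
    return sorted({m.group(0).strip() for m in CITATION_RE.finditer(draft_text)})


def hit_count(citation: str) -> int:
    """Ask the database how many results it has for an exact citation string."""
    headers = {"Authorization": f"Token {os.environ['COURTLISTENER_TOKEN']}"}  # assumed auth scheme
    resp = requests.get(SEARCH_URL, params={"q": f'"{citation}"'}, headers=headers, timeout=30)
    resp.raise_for_status()
    return int(resp.json().get("count", 0))  # assumed response field


if __name__ == "__main__":
    with open("draft_brief.txt", encoding="utf-8") as fh:
        draft = fh.read()
    for cite in extract_citations(draft):
        status = "found" if hit_count(cite) else "NO MATCH - verify manually"
        print(f"{cite:<40} {status}")
```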
Drafting Guardrails
- Use AI for outlines, issue spotting, and style edits, not for generating citations.
- If you want the tool to discuss cases, work only from citations you paste in from verified sources.
- Ban prompts like "find cases supporting X." Instead: "Summarize these verified cases I provide."
Disclosure and Ethics
- Follow court-specific standing orders on AI use. If disclosure is required, include it.
- Update engagement letters to address AI usage, confidentiality, and verification duties.
- Train teams on competence and supervision duties implicated by AI-assisted work.
Quality Control That Catches AI Slop
- A second pair of eyes on every brief that cites authority; no self-certification.
- Run a citation audit: Shepardize/KeyCite all cases; validate statutes and regulations; confirm docket details.
- Flag risk signals: unfamiliar reporters, odd page numbers, missing docket info, or citations that don't appear in major databases (a simple screening sketch follows this list).
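One of those signals, an unfamiliar reporter abbreviation, is easy to screen for mechanically. The sketch below checks each citation's reporter against an approved list; the KNOWN_REPORTERS set shown is a tiny illustrative sample, not an authoritative list, so build yours from the reporters your citator actually covers.

```python
# Sketch: flag citations whose reporter abbreviation is not on an approved list.
import re

KNOWN_REPORTERS = {  # illustrative sample only; maintain a real list for production use
    "U.S.", "S. Ct.", "F.2d", "F.3d", "F.4th",
    "F. Supp. 2d", "F. Supp. 3d", "N.E.2d", "P.3d", "A.3d",
}
CITATION_RE = re.compile(r"\b(\d{1,4})\s+([A-Z][A-Za-z0-9.\s]{1,20}?)\s+(\d{1,5})\b")


def risk_signals(draft_text: str) -> list[str]:
    """Return human-readable flags for citations that look off."""
    flags = []
    for volume, reporter, page in CITATION_RE.findall(draft_text):
        reporter = " ".join(reporter.split())  # collapse stray whitespace
        cite = f"{volume} {reporter} {page}"
        if reporter not in KNOWN_REPORTERS:
            flags.append(f"Unfamiliar reporter: {cite}")
        if int(volume) == 0 or int(page) == 0:
            flags.append(f"Implausible volume or page: {cite}")
    return flags


if __name__ == "__main__":
    with open("draft_brief.txt", encoding="utf-8") as fh:
        print("\n".join(risk_signals(fh.read())))
```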
Incident Response: If a Fake Citation Slips Through
- Notify supervising counsel immediately. Escalate to ethics/risk management.
- Re-verify the full brief. Expect a corrective filing or letter to the court.
- Own the error in plain language; avoid blaming "the AI." Judges care about your process.
- Update your firm policy and training based on the root cause.
Firm Policy Checklist
- Approved AI tools list and prohibited use cases.
- Mandatory verification checklist and sign-off for any filing with citations.
- Disclosure rules by jurisdiction and court.
- Logging: who used AI, for what task, and verification performed (a minimal log-entry sketch follows this checklist).
- Quarterly audits of a random sample of filings.
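The logging item is simple to operationalize. Below is a minimal sketch of a CSV-based AI-use log; the field names (matter_id, tool, task, verified_by, and so on) are illustrative assumptions, so map them to whatever your document management or risk system already records.

```python
# Sketch of an AI-use log entry written to a firm-wide CSV file.
import csv
import os
from dataclasses import dataclass, asdict, fields
from datetime import date


@dataclass
class AIUseLogEntry:
    matter_id: str
    filing_name: str
    tool: str               # e.g. "ChatGPT", "Gemini"
    task: str               # e.g. "first-draft outline", "style edit"
    used_for_citations: bool
    verified_by: str        # reviewer who signed off on every cited authority
    verification_date: str  # ISO date of the sign-off


def append_entry(path: str, entry: AIUseLogEntry) -> None:
    """Append one row to the AI-use log, writing a header if the file is new."""
    names = [f.name for f in fields(AIUseLogEntry)]
    is_new = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=names)
        if is_new:
            writer.writeheader()
        writer.writerow(asdict(entry))


append_entry("ai_use_log.csv", AIUseLogEntry(
    matter_id="2025-0147", filing_name="opposition_to_msj.pdf",
    tool="ChatGPT", task="style edit", used_for_citations=False,
    verified_by="A. Reviewer", verification_date=str(date.today()),
))
```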
KPIs to Track
- Percentage of filings with full verification logs (see the sketch after this list for one way to compute these numbers).
- Number of authority-related corrections post-filing.
- Time-to-detect and time-to-remediate errors.
- CLE/training completion rates for AI usage and verification.
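Two of these KPIs fall out of the log sketched above with a few lines of Python. The column names fully_verified, detected_date, and remediated_date are illustrative assumptions; adjust them to your own log schema.

```python
# Sketch: compute verification coverage and average remediation time from the AI-use log.
import csv
from datetime import date
from statistics import mean


def kpis(log_path: str) -> dict:
    """Return the share of fully verified filings and the mean days from detection to fix."""
    with open(log_path, encoding="utf-8") as fh:
        rows = list(csv.DictReader(fh))
    verified = sum(1 for r in rows if r.get("fully_verified") == "yes")
    days_to_fix = [
        (date.fromisoformat(r["remediated_date"]) - date.fromisoformat(r["detected_date"])).days
        for r in rows
        if r.get("detected_date") and r.get("remediated_date")
    ]
    return {
        "pct_fully_verified": round(100 * verified / len(rows), 1) if rows else 0.0,
        "avg_days_to_remediate": round(mean(days_to_fix), 1) if days_to_fix else None,
    }


print(kpis("ai_use_log.csv"))
```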
What to Watch
Expect more courts to require disclosure, more sanctions for bad citations, and more public exposure by opposing counsel. Internationally, courts and law societies are publishing guidance and tightening expectations.
The firms that win here are boring: they write clean policies, train people well, and verify everything. Let everyone else learn the hard way.
Further Reading and Training
- Mata v. Avianca sanction order (CourtListener)
- Structured AI training by job role (Complete AI Training)