Over 1,200 AI Hallucinations Have Made Their Way Into Legal Filings
Since the first AI-generated case citation appeared in a court brief in 2023, hallucinations have become a recurring problem in legal work product. Last week, a Sixth District Court of Appeals brief contained real citations but quoted sentences that don't exist in the cited sources. The attorney used a reputable legal research vendor's AI tool.
The problem extends beyond court filings. In 2025, a pro se litigant filed dozens of documents allegedly drafted with ChatGPT, many containing hallucinated cases. Her opponent in that litigation, Nippon Insurance, sued OpenAI, claiming the company had engaged in the unauthorized practice of law.
These errors matter because hallucinations are a feature of large language models, not a flaw that will disappear. They are convincing enough to pass initial review.
The Accounting Industry Offers A Blueprint
The legal profession faces a problem the accounting industry solved decades ago. In 2002, MCI WorldCom was found to have improperly capitalized operating expenses as capital expenditures, overstating profits and misleading investors with false numbers. The financials were analogous to AI hallucinations - not real, but persuasive.
Accounting responded with regulation. After the WorldCom and Enron scandals, Congress passed the Sarbanes-Oxley Act. The industry moved from self-regulation to formal oversight. Auditors now review both the numbers and the processes that produce them. Internal auditors serve as a check before external review.
Legal should follow the same path. Trust in financial markets depends on validated numbers. Trust in the justice system depends on validated documents.
Law Firms Need Stronger Review Processes Now
Citation verification using citator services such as Shepard's or KeyCite is no longer optional. Firms should treat it as the baseline.
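A minimal sketch of what automating that baseline could look like: extract reporter-style citations from a brief and flag any that do not appear in a verified list. The regex, the sample text, and the `verified` set are all illustrative assumptions; a production check would resolve citations through a citator service like Shepard's or KeyCite rather than a local lookup.

```python
import re

# Simplified pattern for reporter citations such as "598 U.S. 617" or
# "123 F.4th 456". Real citation formats (Bluebook) are far more varied;
# this pattern is illustrative only.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.(?:2d|3d|4th)|S\. Ct\.)\s+\d{1,4}\b"
)

def flag_unverified(brief_text: str, verified: set[str]) -> list[str]:
    """Return citations found in the brief that are not in the verified set."""
    found = CITATION_RE.findall(brief_text)
    return [c for c in found if c not in verified]

# Hypothetical example: one verified citation, one fabricated one.
brief = "See 598 U.S. 617 and the holding of 123 F.4th 456."
verified = {"598 U.S. 617"}
print(flag_unverified(brief, verified))  # ['123 F.4th 456']
```

Even a crude extractor like this catches the easy failure mode - a citation that simply does not exist - before a human reviewer spends time on it.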
Beyond that, firms need to build review processes that scale with AI-generated output. As agentic tools spread and client pressure mounts, the volume of AI-assisted work could grow tenfold or more. Human review alone cannot keep pace.
Specific steps firms should consider:
- Implement independent systems designed to catch citation errors and hallucinations
- Use adversarial AI approaches - a second model tasked with disproving the first
- Establish sampling protocols for high-volume work like mass torts or e-discovery summaries
- Create document-level confidence scoring
- Tie human sign-off to defined review thresholds
- Consider an internal audit function structurally separate from practice areas
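The sampling and confidence-scoring steps above can be sketched together: documents below a confidence threshold always go to human review, and a fixed fraction of the rest is sampled at random. The threshold, sample rate, and field names here are assumptions for illustration, not a standard schema.

```python
import random

def select_for_review(docs: list[dict], threshold: float = 0.8,
                      sample_rate: float = 0.1, seed: int = 0) -> list[str]:
    """Route low-confidence documents to mandatory review; sample the rest.

    Each doc is assumed to carry an 'id' and a model-reported 'confidence'
    in [0, 1]; both field names are illustrative.
    """
    mandatory = [d["id"] for d in docs if d["confidence"] < threshold]
    remainder = [d["id"] for d in docs if d["confidence"] >= threshold]
    rng = random.Random(seed)  # fixed seed keeps the sample reproducible for audit
    k = max(1, round(sample_rate * len(remainder))) if remainder else 0
    sampled = rng.sample(remainder, k)
    return mandatory + sampled

docs = [
    {"id": "brief-1", "confidence": 0.95},
    {"id": "brief-2", "confidence": 0.62},  # below threshold: always reviewed
    {"id": "brief-3", "confidence": 0.88},
]
print(select_for_review(docs))
```

The design choice worth noting is the seeded sampler: an auditable review protocol should be able to show, after the fact, why a given document was or was not pulled for human sign-off.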
Staff training must be ongoing. Clear protocols should define who reviews, when reviews happen, and against what standard.
The Bar Association Should Provide Operational Guidance
The American Bar Association has issued initial guidance on professional standards for AI use. Lawyers understand their responsibilities under rules such as Rule 11, but lack formal operational guidance on how to meet those standards.
The accounting profession has GAAP - Generally Accepted Accounting Principles - to guide financial reporting. The ABA could offer similar guidance on operationalizing Rule 11.
Industry collaboration through existing forums like the SALI Alliance could develop best practices. But if AI-generated errors undermine trust in legal documents, bar associations and regulators may need to step in sooner rather than later.
Pro Se Litigants Require Special Consideration
Courts should consider minimum standards before pro se litigants can file documents created with AI. A disclosure requirement - stating that AI was used - would help opposing counsel and judges understand the source of potential errors.
Some courts could offer services that validate pro se filings before submission. This would reduce hallucinated filings while preserving access to justice.
The Core Problem: Scale Breaks Traditional Validation
The legal system was engineered for accuracy at human drafting speeds. AI breaks that balance by automating content creation at a scale that overwhelms traditional review methods.
The solution is not to abandon AI. It is to automate validation to match the speed of creation. AI for legal professionals must include robust review infrastructure.
Contracts and legal advice to clients face the same hallucination risks as court filings. Firms that ignore this risk expose themselves and their clients to liability.
Just as investors need confidence in financial reporting, the legal system needs confidence that hallucinations are managed when AI is part of work-product creation. The review of AI-generated work product is now the greatest systemic limitation the legal industry faces in AI adoption.