AI Hallucinations Are Now a Courtroom Risk: What Legal Teams Must Do Next
A federal judge in Arizona just flagged a brief as "replete with citation-related deficiencies" - 12 of the 19 cases cited were fabricated, misleading, or unsupported. The sanctions order, issued by U.S. District Judge Alison Bachus, underscores a simple point: unchecked generative AI in legal work creates professional and ethical exposure.
This is no one-off. A research database tracking "AI hallucination" filings shows a surge since late 2024, with Arizona among the top jurisdictions in the U.S., trailing only the Southern District of Florida. The U.S. accounts for the majority of the 486 tracked cases worldwide.
The trend by the numbers
- Arizona: at least six federal filings since September 2024 include fabricated AI material.
- U.S. total: 324 federal, state, and tribal cases; 486 globally.
- Who's filing: 189 cases were filed by pro se litigants - but lawyers account for 128, and judges for two.
As one legal researcher summarized, there are now "hundreds" of instances where lawyers, experts, and even judges filed materials containing hallucinated citations.
Sanctions are real - and public
In Arizona (Mavy v. Commissioner of Social Security Administration), Judge Bachus ordered the sanctioned attorney to notify the federal judges whose names appeared on the fabricated opinions and to provide the sanctions order in future cases where she appears. Her permission to appear pro hac vice was revoked, and the court notified her home-state bar.
Other courts are taking an equally firm line. A judge in South Florida sanctioned a lawyer for "false, fake, non-existent, AI-generated legal authorities" across eight cases and required attaching the sanctions order to future filings for two years. A New York federal judge levied a $5,000 fine in 2023 for a ChatGPT-generated brief full of fictitious cases. Colorado imposed a 90-day suspension in a disciplinary matter where an attorney denied AI use despite messages indicating otherwise.
Judges and chambers are not immune
AI errors have also surfaced in judicial orders. A federal judge in Mississippi acknowledged that an early draft order included false statements generated by Perplexity; the order was quickly retracted. Another judge in New Jersey withdrew an opinion after counsel pointed out fabricated quotes and misstated holdings.
Arizona's judiciary has issued guidance reminding the bar that lawyers must verify AI-generated research before submitting work to courts or clients. The Arizona Supreme Court's committee guidance is direct: judges and attorneys - not AI tools - are responsible for final work product.
Why this matters for case law and client trust
Courts rely on precedent and accurate citations. As one law professor warned, hallucinated opinions can mislead judges and clients, and may even show up in orders that sway real disputes. A federal court in California called that possibility "scary." If fake cases become prevalent and effective, they erode confidence in judicial decisions.
Even outside the courtroom, AI's accuracy is inconsistent. When SCOTUSblog tested a popular model on 50 Supreme Court questions in 2023, it answered fewer than half correctly. In his year-end report that same year, the Chief Justice cautioned that any use of AI demands humility - and that citing non-existent cases is always a bad idea.
Practical workflow: reduce your exposure to AI hallucinations
- Adopt a "trust but verify" policy: treat all generative outputs as unverified until confirmed with primary sources.
- Validate every citation: use KeyCite/Shepard's, confirm reporter citations and docket numbers, and read the full opinion - not just a summary.
- Cross-check in multiple systems: Westlaw, Lexis, Fastcase/Casemaker, and official court sites; pull PACER dockets for contested or obscure references.
- Scrutinize quotes and holdings: compare block quotes to the official PDF; confirm that the cited proposition matches the actual holding.
- Watch for telltales: odd reporter abbreviations, mismatched judge initials, impossible dates, nonstandard case numbers, and citations that appear nowhere else.
- Use retrieval-based prompts: require the model to cite only from documents you supply (opinions, statutes, rules) and to provide pin cites and URLs/DOIs (a template sketch follows this list).
- Log AI use in your file: model name/version, prompts, system settings, and human verification steps (see the example record after this list). Preserve search trails for possible court inquiry.
- Supervise nonlawyers: reinforce Model Rules 5.1/5.3. If a paralegal or junior lawyer uses AI, the responsible attorney verifies every line.
- Follow local rules and standing orders: if a judge requires disclosure of AI assistance, comply and describe your verification process.
- Set firmwide guardrails: define approved tools, banned tasks (e.g., drafting citations without sources), and mandatory checks before filing.
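To make the retrieval-based prompting step concrete, here is a minimal Python sketch of a grounded prompt. The rule wording, the `build_prompt` helper, and the `[Doc N]` numbering are illustrative assumptions, not a vendor-recommended format; adapt them to whatever system-prompt or context field your tool exposes.

```python
# Minimal retrieval-grounded prompt sketch. The rule wording and the
# [Doc N] numbering are assumptions for illustration, not a standard.
RETRIEVAL_PROMPT = """You are drafting a legal memo.

RULES:
1. Cite ONLY from the source documents provided below - never from memory.
2. Every proposition must carry a pin cite to a supplied document and page.
3. If the sources do not support a point, write: NO SUPPORT IN SOURCES.

SOURCE DOCUMENTS:
{sources}

TASK:
{task}
"""

def build_prompt(sources: list[str], task: str) -> str:
    """Assemble the grounded prompt from excerpts you supply (opinions,
    statutes, rules), numbered so pin cites can point back to them."""
    numbered = "\n\n".join(f"[Doc {i + 1}] {text}" for i, text in enumerate(sources))
    return RETRIEVAL_PROMPT.format(sources=numbered, task=task)

# Example: two supplied excerpts and a drafting task.
print(build_prompt(
    sources=["Excerpt of Opinion A, p. 12: ...", "Excerpt of Statute B: ..."],
    task="Summarize the standard of review, with pin cites.",
))
```

The point of the numbering is auditability: every pin cite in the output should trace back to a document you actually handed the model.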
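And for the logging step, a sketch of what one AI-use record might contain, appended as JSON Lines. Every field name here (`matter_id`, `tool`, `verification`, and so on) is an assumption for illustration; map them onto your firm's document-management conventions.

```python
import json
from datetime import datetime, timezone

# One illustrative AI-use log record. All field names are assumptions,
# not a standard schema; adapt to your firm's conventions.
entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "matter_id": "2024-000123",  # hypothetical matter number
    "tool": {"name": "example-llm", "version": "2024-06"},  # record the exact model/version used
    "prompt_summary": "Draft standard-of-review section; cite only supplied opinions.",
    "sources_supplied": ["Opinion_A_slip_op.pdf", "Statute_B.pdf"],
    "verification": [
        {
            "citation": "598 F.3d 1099",
            "checked_in": ["Westlaw", "PACER"],  # at least two systems
            "quote_verified_against_official_pdf": True,
            "reviewer": "initials-of-responsible-attorney",
        }
    ],
}

# Append one JSON line per record, preserving an audit trail
# you can produce if a court asks how a filing was verified.
with open("ai_use_log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(entry) + "\n")
```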
Fast screen: how to spot a fake citation in minutes
- Look up the case in at least two databases; if it doesn't appear in either, treat it as suspect.
- Confirm the reporter, volume, page, year, and court - and that the judge assigned to the panel or division makes sense (a quick format screen like the sketch after this list can triage the obvious telltales).
- Pull the official PDF from the court or PACER if available; verify the quote and pin cite against the original pages.
- Run citators to see if the case exists, is good law, and actually supports the stated proposition.
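For the format-confirmation step, here is a minimal Python screen for the telltales listed earlier. The reporter list and year bounds are illustrative assumptions and far from exhaustive; passing this screen only means a citation is plausibly formatted - it says nothing about whether the case exists, which still requires a database lookup and the official PDF.

```python
import re
from datetime import date

# Illustrative, non-exhaustive reporter abbreviations; extend for your jurisdictions.
KNOWN_REPORTERS = (
    r"(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)|F\. Supp\.(?: 2d| 3d)?|P\.(?:2d|3d)|A\.(?:2d|3d))"
)

# Matches "volume reporter page" with an optional "(court year)" parenthetical.
CITATION_RE = re.compile(
    rf"^(?P<vol>\d{{1,4}}) (?P<rep>{KNOWN_REPORTERS}) (?P<page>\d{{1,5}})"
    rf"(?: \((?P<court>[^)]*?)(?P<year>\d{{4}})\))?$"
)

def screen_citation(cite: str) -> list[str]:
    """Return formatting red flags. An empty list means the FORMAT is
    plausible -- not that the case exists. Always verify in a citator."""
    flags = []
    m = CITATION_RE.match(cite.strip())
    if m is None:
        flags.append("unrecognized reporter or malformed volume/page")
        return flags
    year = m.group("year")
    if year and not 1776 <= int(year) <= date.today().year:  # crude sanity bounds
        flags.append(f"implausible year: {year}")
    return flags

for cite in ["410 U.S. 113 (1973)",
             "598 F.3d 1099 (9th Cir. 2010)",
             "999 F.5th 123 (2031)"]:  # F.5th does not exist - a classic telltale
    print(cite, "->", screen_citation(cite) or "format plausible; still verify")
```

A screen like this is a triage tool only: it catches the "odd reporter abbreviations" and "impossible dates" class of fakes in seconds, so human time goes to reading the opinions that pass.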
Ethics, duties, and disclosure
Generative AI does not change your duties under the rules of professional conduct. Competence, candor, and supervision still apply. Treat hallucinations as your problem, not the tool's.
- Model Rule 1.1 (competence) and 3.3 (candor to the tribunal) are implicated when fabricated authority slips into a filing.
- Model Rules 5.1 and 5.3 require meaningful supervision of lawyers and nonlawyers who touch the work product.
ABA Model Rules of Professional Conduct
For docket verification and original filings, use official court sources and PACER.
PACER: Public Access to Court Electronic Records
What's next
Researchers tracking these incidents report the pace has picked up - from a couple of cases a week to multiple per day. The silver lining: the scrutiny is flushing out sloppy habits that predate AI and forcing firms to tighten their workflows.
Skill up your team
If your firm is formalizing AI policies, training lawyers and staff on verification-first workflows pays off. Here's a curated list of AI learning paths organized by job role.
Complete AI Training: Courses by Job
Bottom line: use AI for speed, but never outsource judgment. Every citation, quote, and holding still needs a human who reads the case and stands behind it.