Fourth Circuit admonishes attorney as AI hallucinations appear in more than 800 U.S. court filings

More than 800 U.S. court cases have involved AI-generated fake citations, invented quotes, or fabricated opinions. Lawyers are using the tools without verifying sources, and courts are now requiring disclosure of AI use in filings.

Categorized in: AI News Legal
Published on: Apr 21, 2026
Over 800 Court Cases Marred by AI Errors as Lawyers Struggle With Hallucinations

The Fourth Circuit Court of Appeals publicly admonished a Washington, D.C., attorney in March for submitting a brief containing citations to nonexistent cases. The court found that attorney Eric Nwaubani had violated local rules by filing documents with fabricated judicial opinions, likely generated by artificial intelligence.

The case signals a broader problem. An informal database maintained by a Paris business professor has identified more than 800 U.S. legal decisions in which generative AI produced hallucinations - instances where the technology invented case citations, false quotes, or entire judicial opinions that never existed.

Courts across the country are grappling with how to address these errors. The Fourth Circuit noted in its opinion that "nonexistent cases are the frequent posterchild for problems" as lawyers increasingly use AI tools to draft briefs and motions.

Why Hallucinations Happen

Generative AI models don't check whether information is real before presenting it. When asked to support a legal argument, the tools often fabricate citations rather than admit they cannot find support for a claim.

Iria Giuffrida, a professor at William & Mary Law School, said the issue isn't that hallucinations are becoming more common in AI output - they aren't. Instead, more lawyers are using the technology without verifying citations. "The number of hallucinations in terms of the output of the machine is pretty much the same, but there are more lawyers using it, and lawyers are failing to check the cites," Giuffrida said.

The problem deepens because AI tends to affirm what users want to hear. "The output is likely to really fascinate lawyers because they now have exactly the kind of argument that they need to win in court," Giuffrida said. "The problem is, it's not based on actual case law or statute."

Kevin Cope, a professor at the University of Virginia School of Law, offered a practical analogy: "I'd advise lawyers to treat generative AI models like an intern who's extremely knowledgeable but unreliable and way too eager to please."

Courts Demand Disclosure

Richmond's Circuit Court adopted a local rule on January 13 requiring litigants to disclose AI use in filings 10 days before a trial or hearing. The rule explicitly allows AI tools but places responsibility on attorneys to verify that documents are "hallucination-free" and grounded in actual law.

Failure to check AI sources may violate Virginia Code § 8.01-271.1(B), which governs the signing of pleadings and motions. Courts nationwide are increasingly asking lawyers to confirm whether they used large language models and, if so, how they were deployed.

When AI Works in Legal Practice

Used correctly, AI can serve lawyers well. Beth Burgin Waller, chair of Woods Rogers' Cybersecurity and Data Privacy Practice, said she applies AI tools to summarize materials and test arguments - not to generate case law or citations.

Cope agreed that AI excels at drafting arguments based on materials lawyers provide. "They can be pretty good at drafting arguments based on primary legal sources, filings and memos, if provided with enough guidance," he said.

Giuffrida described AI as useful for identifying blind spots in legal arguments. "It's really useful in having generative AI almost like a mirror that you use to see where the blind spots are that exist in your argument," she said.

But these benefits require discipline. Every AI-generated document must be cite-checked against authoritative legal sources. Cope cautioned that lawyers should use AI "with great caution in areas of law you're less familiar with." As with a junior attorney's work, you need enough expertise to recognize when something is probably wrong.

Teaching the Next Generation

Law schools are beginning to address AI use directly. Giuffrida's course on AI and the law, launched at William & Mary in 2018, now requires students to use generative AI tools to draft papers and reflect on the process.

Students must complete an AI disclosure statement explaining what tools they used and in what context. "I want my students to understand in the safety of a course the strengths and weaknesses of these tools," Giuffrida said.

She emphasized that no expert advises attorneys to avoid AI entirely. The responsibility instead falls on lawyers to remain educated and to verify everything before filing. "The responsibility is for us lawyers to make sure that we are confident that what we say in our court documents is correct and that our citations exist," Giuffrida said.

Learn more about AI for Legal professionals or explore the AI Learning Path for Paralegals to understand best practices for integrating these tools into legal work.
