Court sanctions against lawyers for AI errors top 1,200 cases as fines reach record highs

U.S. courts sanctioned lawyers over 800 times last year for AI errors in filed briefs. One Oregon attorney was hit with $109,700 in penalties - possibly a record.

Published on: Apr 04, 2026

Courts sanction lawyers at record pace for AI-generated errors in briefs

U.S. courts sanctioned lawyers more than 800 times last year for filing briefs containing errors produced by artificial intelligence tools. The rate continues climbing, with one federal court ordering an Oregon attorney to pay $109,700 in sanctions and costs last month - potentially a record penalty.

The surge marks a sharp shift from just two years ago. A researcher tracking global instances of AI-related court sanctions counted more than 1,200 cases total, with the U.S. accounting for roughly two-thirds of them.

The most visible case involved lawyers for MyPillow CEO Mike Lindell, who each faced $3,000 fines for submitting briefs with fictitious, AI-generated case citations. Yet the public embarrassment hasn't deterred others.

"Recently we had 10 cases from 10 different courts on a single day," says Damien Charlotin, a researcher at HEC Paris who maintains the tally.

Embarrassments reach state supreme courts

Nebraska's high court grilled Omaha attorney Greg Lake in February over a brief citing nonexistent cases. He claimed he'd uploaded a working draft from a malfunctioning computer and denied using AI. The justices referred him for discipline anyway.

Georgia's Supreme Court encountered a similar situation the following month. Both cases underscored how widespread the problem has become among lawyers who should know better.

The core rule hasn't changed

Lawyers remain responsible for the accuracy of everything they file - regardless of how it was created. That principle predates AI by decades.

"Whatever the generative AI tool gives you, you have to read those cases," says Carla Wale, associate dean of information and technology at the University of Washington School of Law. "You have to read the cases to make sure what you are citing is accurate."

Wale is designing optional AI ethics training for law students. She says the professional rules are still settling, but one principle stands firm: verify everything before filing.

Labeling requirements may not stick

Some courts have imposed rules requiring lawyers to label AI-generated content with details about how it was created. The intention is to flag briefs that need extra scrutiny for hallucinations - false information AI systems confidently present as fact.

But Joe Patrice, senior editor at Above the Law, doubts these labeling rules will survive long-term. He points to a practical problem: AI is becoming embedded in nearly all legal software.

"It's going to become so integrated into how everything operates that to be diligently complying with the rule, you would have to put on everything you put out, 'Hey, this is AI assisted,' at which point it kind of becomes a useless endeavor," Patrice says.

The real concern: autonomous systems

Patrice worries less about lawyers using AI to review documents or research case law - tasks where human oversight remains practical. His concern centers on the next generation of "agentic" systems that promise to handle entire legal jobs from start to finish.

"Once you obscure those middle steps, that's where mistakes happen," he says. "Even people who are well-meaning and not lazy will lose things because they weren't involved in that process."

Pressure to cut corners will grow

As AI speeds up traditionally time-consuming work, it threatens the law firm business model built on billable hours. Lawyers may respond by shifting to project-based billing rather than hourly rates.

That shift could create dangerous pressure. If lawyers bill per outcome rather than hours spent, the temptation to accept AI's first draft - without the careful review that accuracy demands - will only increase.

"Do you slow yourself down to have that natural thinking time?" Patrice asks. "Future generations who grow up in a world where this is always a reality, do they know to stop and think the problem through?"

Skills erosion is a genuine risk

Wale shares this concern about younger lawyers losing analytical skills if they rely too heavily on AI without understanding the underlying work. But she rejects doomsday scenarios predicting AI will replace human lawyers entirely.

"I think that lawyers who understand how to effectively and ethically use generative AI replace lawyers who don't," she says.

The profession itself has become a target. In March, OpenAI faced a federal lawsuit from Nippon Life Insurance Company of America in Illinois. The insurance company alleged it was targeted with frivolous legal actions by a woman receiving bad legal advice from ChatGPT. Among other claims, the suit accuses OpenAI of practicing law without a license.

OpenAI said the complaint "lacks any merit whatsoever."

For lawyers navigating this moment, the path forward remains clear: verify everything AI produces before filing. The courts will continue enforcing that rule, one sanction at a time.

