Court Sanctions Over AI Errors Keep Rising as Lawyers Ignore Early Warnings
Lawyers continue adopting AI tools despite mounting court penalties for filing briefs containing AI-generated errors. Last year saw a sharp increase in sanctions, with more than 1,200 cases documented globally, about 800 of them in U.S. courts, and the rate is still climbing.
The most visible early warning came in 2023 when lawyers for MyPillow CEO Mike Lindell were each fined $3,000 for submitting briefs with fictitious, AI-generated case citations. The cautionary tale had limited effect.
A federal court in Oregon set a recent penalty record, ordering a lawyer to pay $109,700 in sanctions and costs over a filing riddled with AI-generated errors. Damien Charlotin, a researcher at HEC Paris who tracks these cases, said he counted 10 sanctions from 10 different courts on a single day last month.
"We have this issue because AI is just too good, but not perfect," Charlotin said.
High-profile embarrassments have extended to state supreme courts. Nebraska's Supreme Court grilled attorney Greg Lake in February over a brief containing citations to fictitious cases. He claimed he had mistakenly uploaded a working draft from a malfunctioning computer and denied using AI. The court referred him for discipline anyway. Georgia's Supreme Court had a similar encounter the same month.
The Rule Hasn't Changed
Lawyers remain legally responsible for the accuracy of their filings, regardless of how those documents were generated. This long-standing rule has not been modified to account for AI in legal work.
"Whatever the generative AI tool gives you - as in, 'Look at these cases' - you, under the rules of professional conduct, you have to read those cases," said Carla Wale, associate dean of information and technology at the University of Washington School of Law. "You have to read the cases to make sure what you are citing is accurate."
Some courts have imposed additional requirements. They now require lawyers to label anything produced with AI and provide details about how it was used. The goal is to flag briefs that need extra scrutiny for hallucinations: false information the AI generates confidently.
Labeling Rules May Become Unworkable
Joe Patrice, senior editor at Above the Law and a former lawyer, questions whether labeling rules will survive as AI becomes embedded in standard legal software.
"It's going to become so integrated into how everything operates that to be diligently complying with the rule, you would have to put on everything you put out, 'Hey, this is AI assisted,' at which point it kind of becomes a useless endeavor," Patrice said.
He acknowledges AI's genuine value for combing through large volumes of evidence or case law. His concern centers on the next generation of "agentic" systems: tools that handle legal jobs from start to finish with minimal human oversight.
"Once you obscure those middle steps, that's where mistakes happen. And even people who are well-meaning and not lazy will lose things because they weren't involved in that process," he said.
Billing Models May Accelerate Risk
The traditional law firm business model relies on billable hours. As AI speeds up certain tasks, that model faces pressure. Lawyers may shift to billing per deliverable rather than per hour, a change that could intensify time pressure and tempt attorneys to accept AI drafts without thorough review.
"And then it's a real question: Do you slow yourself down to have that natural thinking time?" Patrice asked. "Future generations who grow up in a world where this is always a reality, do they know to stop and think the problem through? And that's a worry."
Wale shares concerns about eroding analytical skills but rejects predictions of fully automated legal systems. "I think that lawyers who understand how to effectively and ethically use generative AI replace lawyers who don't," she said.
AI Itself Faces Legal Scrutiny
In March, Nippon Life Insurance Company of America sued ChatGPT maker OpenAI in federal court in Illinois. The insurance company alleged it was targeted with frivolous lawsuits by a woman receiving bad legal advice from the chatbot. The suit accuses OpenAI of practicing law without a license.
OpenAI responded in a written statement: "This complaint lacks any merit whatsoever."
Wale is developing optional AI ethics training for law students interested in the subject. But she noted that professional consensus on AI use remains incomplete.
"I don't think there is a consensus beyond, 'You have to make sure it's correct.' And so for us, that is the baseline," she said.