Utah Lawyers Sanctioned After AI-Generated Fake Cases Cited in Court Brief

Utah attorneys were sanctioned for citing fabricated AI-generated cases in court filings. The court emphasized that lawyers must verify all sources to maintain the integrity of the justice system.

Published on: Jun 05, 2025

KEY TAKEAWAYS
Utah attorneys were sanctioned after citing fictitious cases generated by artificial intelligence in a court brief. The Utah Court of Appeals stressed the importance of verifying sources, highlighting AI’s potential to produce errors. The attorneys were ordered to cover opposing counsel’s fees and refund their client.

SALT LAKE CITY — For the first time, Utah courts have addressed the use of AI in preparing legal documents, following a sanction against attorneys who cited non-existent cases likely created by ChatGPT. The court made clear that while using reliable AI tools is not inherently improper, attorneys must ensure all filings are accurate under the Utah Rules of Civil Procedure.

"The legal profession must be cautious of AI due to its tendency to hallucinate information," the appellate judges stated. Attorneys have a duty to verify every source, and in this case, the lawyers "fell short of their gatekeeping responsibilities as members of the Utah State Bar" by relying on fabricated cases.

The AI-Generated Petition

Richard Bednar and Douglas Durban, representing Matthew Garner, filed a petition asking the appellate court to review a 3rd District Court decision. Opposing counsel discovered that the petition cited multiple cases that either did not exist or were irrelevant to the dispute. Some references appeared to come from ChatGPT, including at least one case that could apparently be found only through ChatGPT itself.

Opposing attorneys pointed out that several citations were unrelated to the dispute. As a result, Bednar was ordered to pay the opposing party’s legal fees for responding to the petition, refund his client for costs related to filing it, and pay $1,000 to “Justice for All,” a Utah nonprofit supporting equal access to justice.

The Court of Appeals reviewed sanctions from other jurisdictions addressing "hallucinated authority" and concluded similar penalties were appropriate in this instance.

A hearing was held to allow the attorneys to argue against sanctions. Before it took place, Bednar and Durban apologized for the errors and agreed to cover the opposing attorneys’ fees. Their counsel explained that a law clerk had used ChatGPT to draft the document and that Bednar had assumed the citations were verified. Bednar accepted responsibility after learning of the AI use.

Court’s Ruling and Implications

On May 22, the appellate court acknowledged the attorneys’ acceptance of responsibility but emphasized that their lack of care constituted an abuse of the justice system and caused harm: opposing counsel and the court had to spend extra resources addressing the issue, delaying other matters.

The court stressed that opposing counsel and judges should not be tasked with independently verifying every citation’s validity. "This court takes the submission of fake precedent seriously," the opinion stated. "Our system of justice must be able to rely on attorneys complying with their duty."

Case Background

The underlying case involves Matthew Garner’s contract dispute against Kadince, a software company based in North Ogden. Garner, a former shareholder and chief experience officer, alleged that in 2019 he was forced to give up the majority of his shares under threat of termination, violating his employment agreement.

Garner filed suit in February 2020, claiming breaches of contract. Kadince denied the allegations, asked for dismissal, and counterclaimed that Garner breached his contract by filing the lawsuit.

The appellate court ultimately denied both Garner’s petition for appeal—which contained the faulty AI-generated citations—and the attorneys’ request to file an amended petition. The case will proceed as if the appeal had not been filed.

Lessons for Legal Professionals

  • AI can assist in legal research but cannot replace thorough human verification.
  • Attorneys remain fully responsible for ensuring the accuracy and legitimacy of all cited authorities.
  • Failing to verify AI-generated content can lead to sanctions, financial penalties, and harm to clients' interests.
  • The legal system depends on trust in attorneys to uphold ethical and procedural standards.

As AI tools become more common, legal professionals should exercise caution and implement verification protocols when using them for research or drafting. For those interested in understanding responsible AI use in legal contexts, exploring targeted training in AI tools and prompt design may be useful. Resources like Complete AI Training’s ChatGPT courses offer practical guidance on integrating AI responsibly in legal workflows.
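As a concrete starting point for such a protocol, the sketch below is a hypothetical Python example (not a tool referenced in the case) showing one narrow verification step: mechanically extracting citation-like strings from a draft so each one can be confirmed in an official reporter or a research database before filing. The regex covers only a few common reporter abbreviations and is illustrative, not exhaustive.

```python
import re

# Hypothetical pattern for common U.S. reporter citations, e.g. "123 P.3d 456"
# or "545 U.S. 323". Real citation formats vary widely; this is illustrative only.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|P\.(?:2d|3d)?)\s+\d{1,4}\b"
)

def extract_citations(draft_text: str) -> list[str]:
    """Return every citation-like string found in the draft for manual review."""
    return CITATION_PATTERN.findall(draft_text)

if __name__ == "__main__":
    draft = (
        "As held in Smith v. Jones, 123 P.3d 456 (Utah 2005), and "
        "Doe v. Roe, 545 U.S. 323 (2005), the standard applies."
    )
    for citation in extract_citations(draft):
        # Each flagged citation still needs to be confirmed in Westlaw, Lexis,
        # or the official reporter; nothing here validates authenticity.
        print("Verify before filing:", citation)
```

A script like this only surfaces what must be checked; the human verification step that the court emphasized remains the attorney’s responsibility.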