AI-Hallucinated Citation Incident Involving Prominent Alabama Law Firm
In a recent filing in Johnson v. Dunn (N.D. Ala.), Butler Snow, a well-known Alabama law firm, addressed concerns raised by Judge Anna Manasco over apparently fabricated legal citations in court motions. The judge ordered an explanation after discovering that some citations in the firm's filings appeared to have been "hallucinated" by an AI system.
Butler Snow acknowledged that the use of generative AI, specifically ChatGPT, played a role in the issue. Counsel admitted to using AI to supplement legal research without verifying the accuracy of the citations it produced. The AI-generated sources either do not exist or do not support the legal propositions for which they were cited.
Details of the Incident
The problem arose when Mr. Reeves, a partner and assistant practice group leader at Butler Snow, used ChatGPT while revising a motion. Seeking supporting authority for a well-established legal principle, he incorporated AI-generated citations directly into the motion without checking their validity. He has since admitted the citations were false but maintains he did not intend to mislead the Court or opposing counsel.
Mr. Reeves passed the draft to Mr. Cranford, another attorney at the firm, who finalized and filed the motion unaware of the false citations or the use of AI for research. A similar process occurred with another motion referenced in the plaintiff’s response.
Firm's Response and Measures
- Butler Snow emphasized that since 2023, it has cautioned attorneys about the risks of using large language models like ChatGPT for legal research and stressed the need to verify every citation.
- The firm has an Artificial Intelligence Committee currently drafting a comprehensive AI policy to guide ethical and professional use of AI tools.
- Following the Court's Show Cause Order, the firm sent additional reminders reinforcing attorneys’ duties to verify legal authorities.
- Extensive new training on AI's appropriate use in legal representation is planned for all firm attorneys.
Butler Snow did not dispute the Court's authority to impose sanctions under Rule 11 but requested that any sanctions be proportionate to each attorney's role. The firm also asked that its client not be sanctioned, emphasizing the client's lack of involvement in, or knowledge of, the AI-related errors.
Accountability and Moving Forward
The firm offered sincere apologies to the Court, opposing counsel, and all parties involved. It acknowledged a lapse in judgment in relying on ChatGPT-generated legal authorities without verification, despite firm guidance advising against such practices.
Butler Snow committed to implementing stronger internal controls, training, and policies to prevent similar incidents. The firm also requested permission to file amended motions with accurate citations to correct the record without prejudicing its client.
Implications for Legal Professionals
This incident is a cautionary example of the risks of using AI tools like ChatGPT for legal research without thorough verification. While AI can offer convenience, legal professionals must confirm the authenticity and applicability of any citation or authority these tools produce.
Law firms should develop clear guidelines and training programs addressing AI’s role in legal work to maintain ethical standards and uphold the integrity of legal proceedings.
For legal professionals interested in learning more about responsible AI usage and related training, resources are available at Complete AI Training – ChatGPT Resources.