Alabama Law Firm Faces Sanctions After Using AI to Cite Fake Cases in Prison Abuse Defense
Alabama law firm Butler Snow faces sanctions for citing fake AI-generated case law in a prison violence lawsuit. Judge warns of stricter penalties for AI misuse in court filings.

Alabama’s ongoing legal battles over prison conditions have taken a troubling turn. Butler Snow, a law firm paid millions by the state to defend its prison system, is now under federal scrutiny for submitting court filings containing false case citations generated by artificial intelligence.
The Case Background
Frankie Johnson, an inmate at William E. Donaldson prison near Birmingham, Alabama, alleges he was stabbed roughly 20 times during multiple violent incidents between 2019 and 2020. Johnson claims prison officials failed to protect him despite repeated attacks, some occurring under the watch of correctional officers, one of whom allegedly encouraged an assailant.
In 2021, Johnson filed a lawsuit citing rampant violence, overcrowding, understaffing, and corruption within the Alabama Department of Corrections. The state attorney general’s office appointed Butler Snow to defend officials, relying heavily on William Lunsford, head of the firm's constitutional and civil rights litigation group.
AI-Generated Fake Case Citations
During the case, Butler Snow attorney Matthew Reeves, working with Lunsford, used ChatGPT to assist with legal research. Reeves then included several fabricated case citations in filings related to deposition scheduling and discovery disputes. The cited cases did not exist; opposing counsel flagged them as AI-generated fabrications.
For example, one filing cited Kelley v. City of Birmingham (2021), but the only real case by that name dates to 1939 and involved a speeding ticket, wholly irrelevant to the matter at hand.
Judicial Response and Potential Sanctions
U.S. District Judge Anna Manasco expressed serious concern over the incident. At a recent hearing, she emphasized that existing sanctions for AI hallucinations in legal filings have been insufficient and suggested a range of disciplinary options, including fines, mandatory education, referrals to licensing boards, or suspensions.
Attorneys from Butler Snow acknowledged the mistake and apologized. Reeves admitted to knowingly violating the firm’s policy on AI use by failing to independently verify citations before filing.
Judge Manasco granted the firm 10 days to submit a plan addressing how it will prevent future occurrences before deciding on sanctions.
Wider Context of AI in Legal Practice
This case is part of a growing wave of legal professionals encountering issues with AI-generated misinformation in court documents. Damien Charlotin, a legal researcher tracking these instances, notes an acceleration in such cases worldwide, with courts currently issuing mostly lenient penalties unless attorneys fail to take responsibility.
One notable past incident involved a Florida attorney suspended for a year over fabricated AI citations. In California, a federal judge recently fined a firm over $30,000 for similar misconduct.
Implications for Butler Snow and Alabama
Butler Snow holds several multimillion-dollar contracts defending Alabama’s prison system, including a major civil rights case brought by the U.S. Department of Justice alleging violations of the Eighth Amendment’s prohibition on cruel and unusual punishment.
Despite the controversy, the Alabama attorney general's office remains committed to Butler Snow and Lunsford as its chosen counsel. The firm is now reviewing past filings for additional errors.
Key Takeaways for Legal Professionals
- AI tools like ChatGPT can assist research but require careful verification. Blind reliance risks professional sanctions and damages case credibility.
- Legal teams must enforce strict policies on AI usage. Clear guidelines and oversight are essential to prevent errors in filings.
- Judges are increasingly aware of AI-related risks and may impose serious penalties. Accountability remains paramount.
Lawyers using AI should maintain rigorous standards for confirming all citations and legal authorities. This case serves as a cautionary example of how technological shortcuts can backfire in high-stakes litigation.
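As a purely illustrative sketch (not a tool used in this case), a firm could automate a first-pass check that every case citation in a draft appears on a human-verified list before filing. The simplified citation pattern and the `verified_citations` set below are hypothetical; real Bluebook citations are far more varied, and any real workflow would check against an official reporter or database, not a hard-coded set:

```python
import re

# Hypothetical allowlist: citations a human has already confirmed
# against an official reporter or court database.
verified_citations = {
    "Kelley v. City of Birmingham (1939)",
}

# Simplified pattern for "Party v. Party (Year)" citations: a single
# capitalized word before "v.", a capitalized second party that may
# include "of"/"the" connectors, then a four-digit year in parentheses.
CITATION_RE = re.compile(
    r"\b[A-Z][\w.'-]* v\. [A-Z][\w.'-]*(?: (?:of|the|[A-Z][\w.'-]*))* \(\d{4}\)"
)

def unverified_citations(draft: str) -> list[str]:
    """Return citations found in the draft that are not on the verified list."""
    return [c for c in CITATION_RE.findall(draft) if c not in verified_citations]
```

A check like this only flags candidates for human review; it cannot substitute for an attorney independently reading each cited authority.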
For legal professionals interested in ethical AI use and training to avoid similar pitfalls, resources are available at Complete AI Training.