Lawyer Caught Using ChatGPT Fined $5,500 for Submitting Fake Legal Citations
Using AI to assist in legal research can be tempting, but as a recent bankruptcy case in Illinois shows, relying on generative AI without thorough verification is a serious risk. Attorney Thomas Nield and the Semrad Law Firm found this out the hard way after submitting fabricated case law citations generated by ChatGPT to support their client’s repayment plan.
Judge Michael Slade, overseeing the case, discovered that the cases cited by Nield either did not support his arguments or didn’t exist at all. This led to a $5,500 fine and a mandatory AI education session for Nield and another senior attorney at Semrad.
How the Problem Unfolded
The bankruptcy case involved a debtor represented by Semrad Law Firm, with Nield as lead counsel. After the firm filed a Chapter 13 repayment plan, the creditor, Corona Investments LLC, objected, disputing the plan’s feasibility.
Nield’s response cited four precedents supposedly supporting the argument that Corona lacked standing to object. But Judge Slade’s review revealed multiple discrepancies:
- In re Montoya: The quoted language didn’t appear anywhere in the opinion, and the case didn’t address standing issues.
- In re Coleman: The case was from Missouri, not Wisconsin, and the quotation was fabricated.
- In re Russell: Actually a Virginia case, with no relevant discussion on standing or the quoted language.
- In re Jager: The case simply did not exist.
Judge Slade concluded that none of the quotations cited were authentic court statements, and Nield admitted to using AI—specifically ChatGPT—to generate that part of his brief.
Why AI Should Not Be Trusted Blindly for Legal Research
Despite the polished prose AI can produce, Judge Slade emphasized the inherent limitations of generative AI tools in legal research. ChatGPT does not access official legal databases like Westlaw or LexisNexis, nor does it analyze cases for relevance or draft properly formatted citations.
“Any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud,” Slade wrote. He made it clear that AI-generated research must be verified independently before use.
Nield acknowledged his mistake, stating that he had never before used AI for legal research and had trusted the tool not to fabricate citations. He expressed remorse and promised never to rely on AI outputs again without full verification.
Consequences and Educational Measures
Though Semrad and Nield voluntarily admitted misconduct, withdrew compensation requests, and completed an online continuing legal education (CLE) video, Judge Slade imposed sanctions nonetheless. The $5,500 fine was described as “modest,” with a warning that future infractions would incur heavier penalties.
Additionally, Nield and another senior attorney at Semrad are required to attend an in-person session titled “Smarter Than Ever: The Potential and Perils of Artificial Intelligence” during the National Conference of Bankruptcy Judges annual meeting in Chicago on September 19, 2025.
Lessons for Legal Professionals
This case serves as a cautionary tale for attorneys considering AI tools for legal research. While AI can assist with drafting or summarizing, it cannot replace the rigorous process of verifying case law and citations.
Legal professionals must maintain high standards of accuracy and due diligence. Relying blindly on AI-generated content without cross-checking sources can lead to professional discipline and damage to client interests.
For lawyers interested in responsibly integrating AI tools into their practice, proper training is essential. Courses focused on AI limitations and best practices can help avoid pitfalls like those faced by Nield and Semrad Law Firm. Explore practical AI legal training options at Complete AI Training.
Final Word
Generative AI is not yet equipped to provide reliable legal research. Attorneys must treat AI outputs as starting points—not final authority—and verify all citations and facts through trusted legal databases. Ignoring this responsibility risks serious consequences, as this recent case clearly demonstrates.
In legal practice, diligence and accuracy remain paramount. AI should be used with caution and skepticism until it can consistently meet the profession’s exacting standards.