Lawyers Must Verify AI Work, Not Just Trust Efficiency Gains
Lawyers who increasingly use artificial intelligence to draft briefs and legal documents face a critical problem: the tools hallucinate case citations that don't exist. Without human review, these fabricated references can end up filed in court, a practice that violates federal procedural rules.
A column in the February/March issue of Chicago Lawyer highlighted cases where attorneys relied on generative AI to write legal briefs, only to discover the AI invented non-existent cases. Courts have concluded these submissions violate Rule 11 of the Federal Rules of Civil Procedure, which requires attorneys to certify the accuracy of their filings.
The tension is real. Solo practitioners and public defenders often turn to AI tools out of necessity, facing heavy caseloads and limited resources. For them, generative AI functions as a survival mechanism rather than a luxury.
But efficiency cannot replace accuracy. Even when clients demand AI use to cut costs, attorneys remain responsible for verifying every legal citation before filing.
The Path Forward
Legal professionals argue the solution isn't to abandon AI, but to demand better tools: systems that are transparent about how they work, verifiable in their outputs, and built with safeguards that reduce hallucinations.
Until those tools exist, the rule is simple: check the work. A human with legal expertise must review anything AI produces before it reaches a courtroom.