Maryland Appellate Court Flags Fake AI Citations - And Sends a Clear Warning
Artificial intelligence just triggered another courtroom mess. In a Maryland custody appeal, a family lawyer filed a brief that included AI-invented case law and citations that undercut his own arguments.
The lawyer said a law clerk used ChatGPT to "find" citations and help edit the brief. He didn't verify the cases before filing. The court made it clear: that's not competent representation.
What the Court Said
Judge Kathryn Grill Graeff didn't mince words: "It is unquestionably improper for an attorney to submit a brief with fake cases generated by AI." She added that counsel "did not read the cases cited," and that this "does not satisfy the requirement of competent representation."
The court required the attorney to accept responsibility, complete education on the ethical use of AI, and implement firm-wide verification protocols, and it referred him to the Attorney Grievance Commission for potential discipline. The clerk also had to complete training. The court noted this is the first Maryland appellate case addressing AI misuse - and likely not the last.
What Actually Went Wrong
- AI-hallucinated case law was treated as authority.
- Some real citations contradicted the brief's position.
- No primary-source verification before filing.
- Delegation to a non-lawyer with no oversight.
Practical Steps for Legal Teams
- Ban AI tools from generating or selecting legal authorities. Use AI for drafting structure or language only - never for citations or case selection.
- Require a primary-source check for every citation (official reporters, court websites, or trusted databases). Read the full opinion, not just a summary.
- Verify the holding and fit. Add pin cites. Confirm jurisdiction and procedural posture. Ensure nothing in the opinion cuts against your argument.
- Run negative treatment checks (KeyCite/Shepardize). Log who reviewed what and when.
- Adopt an "attorney-of-record verification" rule: the filing lawyer personally signs off after reviewing each cited authority.
- Train every clerk and associate on AI risks, hallucinations, and citation standards. Make this part of onboarding and annual training.
- Check local rules and standing orders for any AI disclosure or certification requirements before filing.
- Audit your briefs quarterly for citation accuracy and workflow compliance.
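One way to make the verification and logging steps above concrete is a simple citation-check log. This is a minimal sketch, not any court's or vendor's format - the record fields and the `unresolved` helper are assumptions about what a firm might track:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record for a firm's citation-verification log.
# Field names are illustrative, not from any official standard.
@dataclass
class CitationCheck:
    citation: str           # e.g. "Smith v. Jones, 123 Md. App. 456 (1998)"
    source: str             # primary source consulted (official reporter, court site)
    reviewer: str           # attorney or clerk who read the full opinion
    checked_on: date
    pin_cite: str = ""      # page supporting the specific proposition
    negative_treatment: bool = False  # flagged by KeyCite/Shepardize?

def unresolved(log: list[CitationCheck]) -> list[str]:
    """Return citations that still need attention before filing."""
    return [c.citation for c in log
            if not c.source or not c.reviewer or c.negative_treatment]
```

Anything returned by `unresolved` blocks the filing until a named reviewer has read the opinion in a primary source and cleared any negative treatment.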
Ethics and Risk
Competence includes understanding relevant technology and its limits. If you use AI, you own the output - and the consequences. Don't delegate judgment to a tool that fabricates facts with confidence.
Review Model Rule 1.1 and related commentary on technological competence for a simple benchmark on what's expected of practitioners.
ABA Model Rule 1.1: Competence
A Simple AI Policy You Can Implement This Week
- Scope: AI may assist with brainstorming, outlines, and plain-language edits. It may not source or summarize authorities without human validation.
- Citations: Every citation must be verified in a primary source. No exceptions.
- Attribution: Track who generated AI content, which tool was used, and who performed verification.
- Approval: Filing attorney signs a one-page checklist confirming reviews were done.
- Security: Don't paste client-confidential information into public tools. Use approved, enterprise accounts with data controls.
Bottom Line
AI can speed up parts of your workflow, but it can't think like a lawyer or accept sanctions for you. Read the cases. Verify the citations. Document your process. That's how you protect your client - and your license.
If your team needs structured, practical training on safe AI use in legal workflows, explore curated options here:
AI Courses by Job | ChatGPT Courses and Guides