Judge Rebukes Lawyer for Using AI-Crafted Citations, Then Defending Them With More AI
A New York Supreme Court case took an unusual turn when defense counsel was accused of filing a brief with fabricated citations and quotes that appeared to be AI-generated. When challenged, the lawyer doubled down, submitting an opposition that the judge said also relied on AI and contained even more mis-citations.
As the judge put it: "Counsel relied upon unvetted AI - in his telling, via inadequately supervised colleagues - to defend his use of unvetted AI." That line sums up the risk: AI without verification is a liability, not a shortcut.
What actually happened
The underlying dispute was routine: a family loan gone bad. The controversy stemmed from the defense brief, which included supposed "paraphrases" with bracketed edits, "citation omitted" notes, and references that either did not support the propositions, addressed unrelated subjects, or in one case did not exist.
After being called out, defense counsel filed an opposition. According to the court, it was worse: more than double the mis-cites, including four non-existent citations and multiple quotes that do not appear in the cited cases.
Why this matters to practicing lawyers
This is a professional responsibility issue before it is a technology issue. Duties of candor, competence, and reasonable investigation apply regardless of the drafting tools you use.
Technology competence now includes understanding AI's failure modes and implementing controls. See Comment 8 to ABA Model Rule 1.1 on technology competence.
Courts have sanctioned AI misuse before. See the sanctions docket in Mata v. Avianca (S.D.N.Y.): CourtListener docket.
AI failure modes that trigger sanctions
- Hallucinated citations: sources that do not exist, or exist but say something else.
- Misquoting: quotation marks around language not found in the source.
- Jurisdiction drift: citing the wrong court or using inapplicable standards.
- Outdated or overruled authority presented as good law.
- Fabricated "signals" and parentheticals ("citation omitted," bracketed edits) that mimic authenticity but are invented.
- Overconfident tone that masks uncertainty and reduces diligence.
Courtroom-safe AI protocol (use this)
- Set the rule: AI may assist with brainstorming and structure. It does not originate authorities. Humans do.
- Source-first research: Pull law from primary databases, reporters, or court websites before any drafting.
- No blind citations: Every citation must be opened, read, and saved as a PDF with highlight/notes.
- Quote verification: Character-for-character check of every quoted string against the source. No exceptions.
- Cite-check: Shepardize/KeyCite for every case and statute. Record status in a research log.
- Proposition support: For each assertion of law, list the exact page/paragraph that supports it.
- AI disclosure policy: If AI touched the draft, certify that all law and quotes were independently verified and provide the verification log if asked.
- Version control: Store sources, logs, and drafts in the matter file. Keep an audit trail.
- Human sign-off: Supervising attorney certifies the filing after spot-checking randomly selected citations and all key authorities.
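The research-log and sign-off steps above can be sketched as a simple data check. This is an illustrative sketch only, not any real cite-checking product: the record fields and the `filing_blockers` helper are hypothetical names chosen for this example.

```python
from dataclasses import dataclass

@dataclass
class CitationRecord:
    # One row of the hypothetical research log described above.
    citation: str          # full cite as it appears in the draft
    pdf_saved: bool        # source opened, read, and saved as an annotated PDF
    quote_verified: bool   # every quoted string checked character-for-character
    citator_status: str    # Shepardize/KeyCite result, e.g. "good law"
    pincite: str           # exact page/paragraph supporting the proposition

def filing_blockers(log: list[CitationRecord]) -> list[str]:
    """Return citations that fail any verification step and must not be filed."""
    bad_statuses = {"", "overruled", "superseded"}
    return [
        r.citation
        for r in log
        if not (r.pdf_saved and r.quote_verified and r.pincite)
        or r.citator_status.lower() in bad_statuses
    ]
```

A supervising attorney's sign-off could then be conditioned on `filing_blockers` returning an empty list, so no filing goes out with an unverified or overruled authority.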
Fast audit for filings already on the docket
- High-risk markers: scan for bracketed alterations in "paraphrases," unexplained "citation omitted," and quotes without pincites.
- Sampling: manually open and verify each quoted passage and a random 30-50% of remaining cites.
- Triangulate: cross-check key authorities across two services or against the court's own site.
- If defects appear: promptly file a corrected brief and a short notice explaining the correction, not an excuse.
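The sampling step above (a random 30-50% of remaining cites) can be sketched in a few lines. This is a minimal illustration, assuming citations are already collected as a list of strings; the `audit_sample` name and its parameters are hypothetical.

```python
import math
import random

def audit_sample(citations, fraction=0.3, seed=None):
    """Pick a random subset of the remaining cites for manual verification.

    fraction should stay within the 30-50% range suggested by the audit
    guideline above; a fixed seed makes the sample reproducible for the file.
    """
    if not 0.30 <= fraction <= 0.50:
        raise ValueError("audit fraction should be between 30% and 50%")
    rng = random.Random(seed)
    k = math.ceil(len(citations) * fraction)  # round up so small lists still get checked
    return rng.sample(citations, k)
```

Recording the seed and the resulting sample in the matter file preserves the audit trail the protocol calls for.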
What to file if AI was used
Keep it simple: "AI-assisted drafting tools were used for outline and language suggestions. All legal authorities and quotations were independently located, verified, and checked for current validity by counsel. Counsel assumes full responsibility for all content." Then make sure that is true.
Firm policy that prevents this mess
- Written AI policy: permitted uses, forbidden uses, verification standards, and approval workflow.
- Confidentiality guardrails: block uploads of client data to tools that do not offer appropriate privacy terms.
- Training: teach failure modes, verification habits, and disclosure norms. Test with real cite-check drills.
- Vendor vetting: prefer tools with retrieval from actual databases and transparent citations over pure text generators.
- Incident response: steps to correct a filing, notify the court if needed, and retrain involved staff.
Bottom line
AI can draft sentences. It cannot carry your duty of candor. Use it for speed where it's safe, but build a verification wall between anything AI suggests and anything you file.
If your team needs structured upskilling, consider role-based programs that emphasize verification and ethics: AI courses by job.