$10,000 fine for AI-fabricated citations puts California lawyers on notice
A California appellate panel fined Los Angeles-area attorney Amir Mostafavi $10,000 after finding that 21 of 23 quotations in his opening brief were fabricated by ChatGPT. The opinion warned that courts will not tolerate unverified citations: "No brief, pleading, motion, or any other paper filed in any court should contain any citations … that the attorney … has not personally read and verified."
The court sanctioned him for filing a frivolous appeal, violating court rules, citing fake authority, and wasting judicial resources. He told the court he did not read the AI-generated text before filing and said he used ChatGPT to "improve" his draft.
Why this matters for your practice
California's 2nd District Court of Appeal published the opinion as a warning to the bar. The Judicial Council of California has directed courts to either ban generative AI or adopt a policy by Dec. 15, and the State Bar is weighing code-of-conduct updates for AI use.
This appears to be the largest fine a California state court has imposed over AI-fabricated citations. In a separate federal case, two firms were ordered to pay $31,100 for "bogus AI-generated research," with the judge emphasizing: "Strong deterrence is needed."
What the data is signaling
Researchers report a surge in filings that cite fake cases across Australia, Canada, the U.S., and the U.K., with frequency rising from a few per month to several per day. A Stanford RegLab analysis (May 2024) found that while three out of four lawyers plan to use generative AI, some tools hallucinate in roughly one out of three queries.
Trackers have identified 50+ instances in California and 600+ nationwide, with more expected as adoption outpaces training. The risk is higher for difficult arguments (confirmation bias), overburdened counsel, and self-represented litigants; there are even recent examples of judges citing non-existent authority.
For context on ongoing research, see Stanford RegLab.
Adopt a written AI policy now
- Scope: Define allowed use cases (e.g., drafting tone/structure) and prohibited uses (legal research, case citations, fact-generation).
- Verification rule: Every citation must be personally read in its primary source and checked with a citator before filing.
- Attribution: The filing attorney owns the content. AI use does not shift responsibility.
- Documentation: Keep a verification log (source, date accessed, who checked, citator status); a minimal log sketch follows this list.
- Disclosure: If a court or client requires it, state when and how AI assisted. Do not list AI as a source of legal authority.
- Access control: Limit AI tools to firm-approved systems; disable auto-insertion of citations.
- Training: Require periodic training on AI risks, hallucinations, and verification workflows.
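The verification log itself can be as simple as a per-matter CSV. Below is a minimal sketch in Python of what the documentation item might look like in practice; the five-field schema, the file name, and the "People v. Example" citation are hypothetical illustrations, not a prescribed format.

```python
import csv
from dataclasses import dataclass, asdict
from datetime import date
from pathlib import Path

@dataclass
class VerificationEntry:
    """One row in the per-matter citation verification log."""
    citation: str        # full citation as it appears in the brief
    source_db: str       # database the primary source was pulled from
    date_accessed: str   # ISO date the source was actually read
    checked_by: str      # attorney or paralegal who verified it
    citator_status: str  # e.g., "good law" or "negative treatment noted"

def append_entry(log_path: Path, entry: VerificationEntry) -> None:
    """Append one verified citation to the matter's CSV log, writing a
    header row the first time the file is created."""
    is_new = not log_path.exists()
    with log_path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(entry)))
        if is_new:
            writer.writeheader()
        writer.writerow(asdict(entry))

# Hypothetical usage, recorded after reading the case in a primary source:
append_entry(
    Path("matter_1234_citation_log.csv"),  # hypothetical file name
    VerificationEntry(
        citation="People v. Example (2020) 50 Cal.App.5th 100",  # hypothetical
        source_db="Westlaw",
        date_accessed=date.today().isoformat(),
        checked_by="A. Associate",
        citator_status="good law",
    ),
)
```

Whatever format you choose, the point is the same: every citation in the filing maps to a row someone can be asked about later.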
A minimal verification workflow (use every time)
- Start with law, not AI: Outline your argument from statutes, regulations, and binding precedent.
- Drafting assist only: If you use AI, remove any citations it suggests by default.
- Primary source check: Pull the cited case or statute from a trusted database. Read it; confirm the quote and holding.
- Citator pass: Shepardize/KeyCite to confirm validity and treatment. Note any negative history.
- Find-by-quote: Search the exact quote. If it doesn't exist in authoritative databases, it doesn't exist.
- Second set of eyes: Have another attorney or paralegal run a citation audit before filing; a simple pattern-flagging sketch follows this list.
- Retention: Save PDFs of sources and your verification log with the matter file.
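The citation audit can be jump-started by extracting every citation-shaped string from the draft so nothing slips past the manual check. Below is a minimal sketch in Python; the regex is a rough, assumed heuristic for reporter-style citations, not a complete citation grammar, and it deliberately over-flags so a human reviewer makes the final call.

```python
import re

# Rough, over-inclusive heuristic for reporter-style citations such as
# "123 Cal.App.4th 456" or "598 U.S. 1". This pattern is an assumption
# for illustration, not a complete citation grammar.
REPORTER_CITE = re.compile(r"\b\d{1,4}\s+(?:[A-Z][A-Za-z0-9.]*\s?)+\d{1,5}\b")

def flag_citations(draft_text: str) -> list[str]:
    """Return citation-shaped strings from a draft so each one can be
    pulled from a trusted database, read, and run through a citator."""
    return [m.group(0) for m in REPORTER_CITE.finditer(draft_text)]

draft = "As held in Smith v. Jones, 123 Cal.App.4th 456, the duty attaches."
for cite in flag_citations(draft):
    print("VERIFY BEFORE FILING:", cite)
```

Anything a pattern like this misses is exactly why the second set of eyes stays mandatory: the script builds the checklist, a human clears it.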
Risk factors you can control
- Time pressure: Rushed filings invite shortcuts. Build a hard stop for verification.
- Complex issues: The weaker or more novel the argument, the higher the hallucination risk.
- Tool prompts: Avoid asking AI to "find cases." Ask it to improve clarity of your text without adding citations.
- Overreliance: Treat AI output as unverified draft text, never as legal research.
Enforcement trends to anticipate
- Published opinions and monetary sanctions for fabricated authority.
- Mandatory education, temporary suspensions, or court-ordered training for violations.
- Court-by-court AI policies requiring disclosure or outright bans in filings and chambers.
Skill up your team
Many lawyers still don't know that these systems fabricate with confidence. Make AI literacy part of onboarding and CLE, with recurring drills on verification and model limits.
If your firm needs a structured path for responsible use, consider focused training on prompt practices, risk controls, and verification workflows: Responsible ChatGPT Certification.
Bottom line
Courts expect you to read and verify every authority you cite. Use AI as a drafting assistant, not a source of law. Build a policy, train your team, and enforce a no-exceptions verification workflow before a judge enforces one for you.