Legal Hallucinations in AI May Undermine Justice
Generative AI is now part of everyday legal work: first drafts, research triage, translations, and summaries. It speeds things up, but it does not think. It predicts the next word. That gap creates risk.
When a tool predicts instead of verifies, it can fabricate sources that look real. In law, that is not a typo; it is a professional liability. Bad advice, sanctions, and damaged trust are the cost of skipping verification.
How Hallucinations Happen
Large language models are trained on massive text corpora and generate likely sequences of words. They do not check facts by default. Ask for a case, and they might invent a plausible citation with a convincing summary.
This shows up as non-existent cases, incorrect quotes, or fake pin cites. If those make it into filings or published work, the fallout lands on you and your client.
Real Consequences in Court
Courts are responding. In September 2025, the High Court of Singapore ordered counsel to personally pay S$800 in costs after a fictitious case generated by GenAI appeared in written submissions. Counsel were also directed to provide the judge's directions to their clients as a reminder of their duty to assist the court with accurate materials.
In the United States, the lesson has been public and costly. In Mata v. Avianca (2023), attorneys were sanctioned after citing non-existent decisions generated by ChatGPT. In People v. Crabill (2023), an attorney was suspended for a year and a day after filing a motion with fabricated cases and failing to alert the court upon discovering the issue.
Data You Can't Ignore
Research underscores the risk. A 2024 study (Dahl et al.) reported legal hallucination rates between 69% and 88% across GPT-3.5, Llama 2, and PaLM 2 when tested against more than 200,000 legal queries. Another paper (Magesh et al., 2025) found hallucinations between 17% and 33% across leading AI legal research tools.
Translation: the error rate is unacceptable without a strong verification layer.
Minimum Professional Standard: Verify Everything
- Never cite AI-sourced cases, statutes, or quotes without confirming them in trusted databases (official reporters, court websites, Westlaw, Lexis, or equivalent).
- Run a citator check every time. If an authority is not in a recognized database, it does not exist for your purposes. A rough pre-filing screen, sketched after this list, can flag citations you have not yet confirmed.
- Do not rely on AI to find authority. Use it for drafting around verified sources or for summarizing documents you already have.
- If you discover a fake cite after filing, inform the court and opposing counsel immediately and correct the record.
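Part of the verification step can be mechanized. The sketch below is a minimal, illustrative pre-filing screen in Python: the file names (draft_submission.txt, verified_citations.txt) are hypothetical, and the regex for reporter-style citations is deliberately rough. It only flags citations a human still needs to confirm; it is not a substitute for a citator check.

```python
import re
from pathlib import Path

# Hypothetical file names; adapt to your own workflow.
DRAFT = Path("draft_submission.txt")
VERIFIED = Path("verified_citations.txt")  # one citation per line, already confirmed in an official database

# Very rough pattern for "volume reporter page" citations, e.g. "678 F. Supp. 3d 443".
# It will miss many formats (neutral citations, pin cites) -- this is a screening aid only.
CITATION_RE = re.compile(r"\b\d{1,4}\s+(?:[A-Z0-9][A-Za-z0-9.]*\s+){1,4}\d{1,4}\b")

def screen_draft(draft_path: Path, verified_path: Path) -> list[str]:
    """Return citations found in the draft that are not on the verified list."""
    verified = {line.strip() for line in verified_path.read_text().splitlines() if line.strip()}
    found = {m.group(0).strip() for m in CITATION_RE.finditer(draft_path.read_text())}
    return sorted(found - verified)

if __name__ == "__main__":
    unverified = screen_draft(DRAFT, VERIFIED)
    if unverified:
        print("Citations not yet verified -- confirm in an official database before filing:")
        for cite in unverified:
            print("  -", cite)
    else:
        print("All detected citations match the verified list.")
```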
Firm Policy That Actually Works
- Disclosure: Define when to disclose AI use to clients and courts. Some courts already require it.
- Permitted uses: Limit AI to brainstorming, formatting, translations, and summarizing provided materials. No authority generation.
- Mandatory verification: Human review plus database verification before any AI-assisted content leaves the firm.
- Source control: Prefer closed-corpus workflows. Feed the model your documents and ask it to work only from those.
- Audit trail: Log prompts, outputs, and who verified what (a minimal logging sketch follows this list). Treat AI like a junior with no bar card.
- Training: Brief your team on risks and the firm's process. Reinforce consequences for skipping checks.
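One way to implement the audit-trail item is an append-only log that every AI-assisted task writes to. The sketch below is illustrative only: the file name, field names, and the choice to hash the prompt and output (to keep the log compact while still proving which text was reviewed) are assumptions, not a prescribed format. In practice the record would live in the firm's document-management system.

```python
import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log location; in practice, store this in the firm's DMS.
AUDIT_LOG = Path("ai_audit_log.jsonl")

def log_ai_use(matter_id: str, tool: str, prompt: str, output: str,
               verified_by: str, verification_notes: str) -> None:
    """Append one AI-use record as a JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter_id": matter_id,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "verified_by": verified_by,
        "verification_notes": verification_notes,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record that a summary was checked against the underlying documents.
log_ai_use(
    matter_id="2025-0042",
    tool="generic-llm-assistant",
    prompt="Summarize the attached lease in 300 words.",
    output="(model output text)",
    verified_by="A. Associate",
    verification_notes="Summary checked against the executed lease; no authorities cited.",
)
```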
Prompting Rules That Reduce Risk
- Ban open-ended authority requests. Instead: "Do not invent cases. If uncertain, say 'unknown.'"
- Ask for issues, arguments, or outlines, not citations. Add sources only after you verify them.
- When summarizing, provide the exact text or PDF and instruct the tool to reference only that content; a prompt-assembly sketch follows this list.
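These rules can be baked into a reusable prompt template so no one has to remember them under deadline pressure. The sketch below assembles such a prompt in Python; the guardrail wording, function name, and example file are illustrative assumptions, and the actual model call is deliberately omitted because it depends on the vendor.

```python
# Minimal prompt-assembly sketch. The point is the instruction pattern,
# not any particular vendor's API, so no model call is shown.

GUARDRAIL_INSTRUCTIONS = (
    "Work only from the source text provided below. "
    "Do not cite or invent any case, statute, or quotation that does not appear in it. "
    "If the source text does not answer the question, reply exactly: unknown."
)

def build_prompt(source_text: str, task: str) -> str:
    """Combine the guardrails, the verified source text, and the task."""
    return (
        f"{GUARDRAIL_INSTRUCTIONS}\n\n"
        f"--- SOURCE TEXT START ---\n{source_text}\n--- SOURCE TEXT END ---\n\n"
        f"Task: {task}"
    )

# Example: summarizing a document you already hold, not asking for authority.
with open("clause_14_indemnity.txt", encoding="utf-8") as f:  # hypothetical file
    clause = f.read()
print(build_prompt(clause, "Summarize the indemnity clause in plain English for the client."))
```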
If a Fake Citation Slips Through
- Verify immediately using official sources. Document the check.
- If it is fake, notify the court and opposing counsel, withdraw or amend the filing, and explain your correction steps.
- Conduct an internal post-mortem: where the control failed, what to change, who signs off next time.
Relevant Guidance and Resources
See the Bar Council Malaysia's circular on risks and precautions for using generative AI in legal practice. It directs lawyers to verify AI outputs against traditional legal databases.
Upskilling Your Team
If your firm needs practical AI skills with guardrails, explore role-based training options that emphasize verification-first workflows.
Bottom Line
GenAI can speed up the grind, but it cannot replace legal judgment or verified sources. Treat AI output as a draft, never an authority. Your license and your client's case depend on it.