Why Banning AI in Court Is the Wrong Fix for Fake Case Citations
Accountability – not prohibition – is the proper approach in the age of AI-generated errors
Zhang v. Chen, 2024 BCSC 285, was Canada’s first reported decision involving fake case citations generated by ChatGPT. Fraser MacLean, the lawyer who successfully challenged the fabricated citations, pointed out: “The problem isn’t that AI was used. It’s that lawyers submitted citations to the court that were completely fabricated.”
Similarly, in R. v. Chand, 2025 ONCJ 282, the Ontario Court of Justice ordered defence counsel not to use generative AI for legal research after numerous erroneous citations appeared in the defence’s submissions. These incidents serve as warnings, but banning AI is not the right answer. The focus should instead be on verification.
The Case for Regulation Over Bans
Legal professionals must remain responsible for the accuracy of their work, no matter the tools used. Whether the source is a junior associate, a legal database, or an AI model, submitting inaccurate or false citations violates ethical obligations.
Legal ethicist Professor Amy Salyzyn argues that regulators should adapt existing rules rather than ban AI. Her recommendations include:
- Requiring lawyers to understand and use “relevant” technologies that are “reasonably available.”
- Making “reasonable efforts” to prevent unauthorized disclosure of client data.
- Taking “reasonable steps” to ensure any legal tech aligns with ethical duties.
MacLean adds that when AI-generated content is filed in court, lawyers should disclose what was created with AI, specify its purpose, and confirm it was personally reviewed and verified for accuracy.
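How such a disclosure is captured is up to individual firms and courts; as one illustration only, a firm could log each AI-assisted filing in a simple structured record mirroring MacLean’s three points. The sketch below is a hypothetical internal log entry, not a form prescribed by any court rule, and every field name is an assumption.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseDisclosure:
    """Hypothetical record of AI assistance in a court filing (illustrative only)."""
    filing: str                 # the document the disclosure relates to
    ai_generated_content: str   # what was created with AI
    purpose: str                # why AI was used
    reviewing_lawyer: str       # who personally reviewed the output
    verified_accurate: bool     # reviewer confirms every citation was checked against the source
    review_date: date = field(default_factory=date.today)

# Example entry reflecting MacLean's three disclosure points (matter name is invented)
disclosure = AIUseDisclosure(
    filing="Notice of Application, Smith v. Jones",
    ai_generated_content="First-draft summaries of three authorities",
    purpose="Accelerate the initial research memo",
    reviewing_lawyer="A. Lawyer",
    verified_accurate=True,
)
print(disclosure)
```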
Ethical Tools and Smart Policies
MacLean has tested legal-specific AI tools like Alexi, which draw on closed, verified databases. Unlike general AI tools such as ChatGPT, Alexi connects directly to real cases, sharply reducing the risk of hallucinated citations. Still, human oversight remains critical. “The summaries can sometimes oversell the point. You always have to check the case,” he notes.
Banning AI in law firms could backfire: lawyers might use AI covertly on personal devices or outside firm systems, creating cybersecurity and oversight risks. Instead, firms should adopt internal AI-use protocols (a minimal citation-screening sketch follows the list), such as:
- Requiring printouts of any cited case’s first page.
- Providing AI training focused on detecting hallucinated content.
- Reviewing all AI-assisted outputs, including marketing materials.
- Encouraging transparency and secure AI use within firm infrastructure.
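None of these protocols require sophisticated tooling. As a rough illustration of the verification step, and not any firm’s or court’s actual process, a firm could run a simple screen that pulls Canadian neutral citations out of a draft and flags any not yet recorded in the firm’s verified-research log. The citation pattern and the verified list below are assumptions made for the sketch; flagged citations would still be pulled and read by a lawyer before filing.

```python
import re

# Rough pattern for Canadian neutral citations, e.g. "2024 BCSC 285" (simplified, illustrative only)
NEUTRAL_CITATION = re.compile(r"\b(19|20)\d{2}\s+[A-Z]{2,6}\s+\d{1,5}\b")

def screen_citations(draft_text: str, verified: set[str]) -> list[str]:
    """Return citations appearing in the draft that are not in the verified-research log."""
    found = {m.group(0) for m in NEUTRAL_CITATION.finditer(draft_text)}
    return sorted(found - verified)

# Hypothetical usage: the verified set would come from the firm's own log,
# built only from cases a lawyer has actually retrieved and read.
verified_cases = {"2024 BCSC 285", "2025 ONCJ 282"}
draft = "As held in 2024 BCSC 285 and 2023 ABCA 999, the test is ..."

for citation in screen_citations(draft, verified_cases):
    print(f"UNVERIFIED: {citation} - pull the case and confirm it exists before filing")
```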
What Courts and Firms Should Really Do
Disclosure requirements may be appropriate in limited cases—such as when AI-generated summaries are submitted in formal pleadings—but only when paired with a lawyer’s certification of accuracy. Blanket bans or mandatory disclosures for every AI use, even routine tools like spellcheck or grammar correction, are unnecessary.
The focus must remain on results and accountability, not on the tool itself. In Zhang, MacLean highlighted the risks of unchecked AI use: flawed judicial reasoning, wasted resources, and damage to the profession’s reputation. But banning AI is like banning planes after early crashes—it’s not the answer.
Toward a Trusted AI Legal Future
General AI tools such as ChatGPT are not suited for legal research. However, legal-specific platforms trained on reliable, permissioned data can improve efficiency and consistency in the legal system. AI adoption is growing, especially among in-house legal teams.
Courts and regulators should support responsible innovation by setting clear rules, enforcing verification protocols, and maintaining ethical safeguards. Reactionary bans miss the point. The real risk lies not in the technology—but in how it’s used. The key is building professional structures that promote wise and accountable AI use.