Ontario Judge Flags AI-Generated Legal Errors as Experts Warn of Growing Risks in Courtrooms

An Ontario judge struck a lawyer's court submissions over AI-generated errors, including citations to fake cases, and ordered them refiled. Lawyers must verify AI-generated content before filing to avoid misleading courts.

Published on: Jun 04, 2025
Ontario Judge Highlights Risks of Relying on AI for Legal Filings

An Ontario judge recently rejected court submissions from a criminal defence lawyer over serious inaccuracies stemming from apparent reliance on artificial intelligence (AI) tools. The incident highlights a growing concern in the legal profession: generative AI can produce false or fabricated information that ends up in court documents and may affect case outcomes.

False Legal Citations and AI Hallucinations

Justice Joseph F. Kenkel, from the Ontario Court of Justice, ordered lawyer Arvin Ross to refile defence submissions after identifying multiple errors, including citations to fictitious cases and irrelevant civil case law. The judge emphasized that such mistakes are "numerous and substantial."

Experts warn that AI tools like ChatGPT are prone to producing what’s known as “hallucinations”—fabricated content that appears legitimate but lacks basis in actual law. Amy Salyzyn, an associate professor at the University of Ottawa’s faculty of law, explains that generative AI predicts text patterns rather than retrieving verified information. This can lead to invented cases or mismatched citations that risk misleading judges.

"You don’t want a court making decisions about someone’s rights or liberty based on something totally made-up," Salyzyn said. The possibility of such inaccuracies slipping through raises concerns about miscarriages of justice.

Judge Kenkel’s Directives to Prevent AI Errors

In his ruling on May 26, 2025, Justice Kenkel ordered Ross to produce a new set of submissions meeting strict standards:

  • Numbered paragraphs and pages
  • Precise “pinpoint cites” to specific paragraphs supporting legal points
  • Verification of case citations with links to authoritative sources like CanLII
  • Prohibition on using generative AI or AI-powered legal research tools for these submissions

Kenkel’s decision reflects an insistence on accuracy over convenience when it comes to legal research and submissions.

Wider Implications in Canadian Courts

This is not an isolated event. The case, R. v. Chand, is the second Canadian matter flagged internationally for involving AI-generated hallucinated content. Earlier, in Zhang v. Chen, a B.C. judge reprimanded a lawyer for inserting two fake cases created by ChatGPT. That judge stressed that AI cannot replace professional legal expertise and underscored the importance of technological competence in legal practice.

Legal professionals globally face similar challenges as AI tools become more accessible. The risk lies in lawyers treating generative AI as a reliable research source without verifying the output thoroughly.

Responsibility and Ethical Considerations

Lawyers remain fully responsible for the accuracy and integrity of their filings, regardless of AI assistance. Nadir Sachak, a Toronto criminal defence lawyer, points out that while AI can be a useful resource, it requires careful oversight. Lawyers must review AI-generated content diligently before submission.

There is also an ethical dimension to billing clients for AI-generated work. Sachak notes that billing for hours not actually spent on manual legal work raises concerns about professionalism and fairness.

Guidance from Regulatory Bodies

The Law Society of Ontario has acknowledged the challenge and released a white paper outlining guidance for lawyers using generative AI. While details of investigations remain confidential, the society emphasizes adherence to professional conduct rules when incorporating AI into legal services.

Legal practitioners interested in improving their understanding and management of AI tools may consider specialized training. Resources such as Complete AI Training’s courses for legal professionals offer practical insights on using AI responsibly within the legal framework.

Key Takeaways for Legal Professionals

  • AI-generated legal content can include fabricated cases and inaccurate citations.
  • Verification of all AI-produced material is essential before submitting to courts.
  • Judges may reject filings that rely on unverified AI output, requiring costly refiling.
  • Lawyers remain ethically and professionally accountable for all documents they submit.
  • Regulatory bodies provide evolving guidance on responsible AI use in legal practice.

As AI tools become more common in law firms, maintaining rigorous standards for fact-checking and citation accuracy is critical to uphold the integrity of legal proceedings.