UBC Law builds AI guardrails for justice, from deepfakes to due process

UBC's Allard Law is moving fast to keep courts credible as AI enters legal practice. New courses and partnerships focus on fair use, deepfake detection, and stronger evidence rules.

Published on: Jan 13, 2026

AI in the law: How UBC researchers are helping to future-proof justice

AI is moving into courts and law offices faster than most policies, procedures, and ethics codes can keep up. Fairness, accuracy, and accountability are on the line. UBC researchers are pushing to keep the legal system credible while using AI where it actually helps.

A focused initiative at Allard Law

The Peter A. Allard School of Law has launched a new initiative to integrate AI safely and equitably into legal practice and education. Backed by a $3.5-million gift from the estate of UBC law alumnus Gordon B. Shrum, the program supports courses on AI regulation, liability, copyright, and surveillance and privacy risks.

"The risks of AI in the legal system are numerous: overreliance on generative tools, fabricated digital evidence, intellectual property infringement and more," said UBC law lecturer Jon Festinger, who is helping lead the initiative. "But so too are the opportunities: AI could improve access to justice by providing free, basic legal advice or automating repetitive tasks, cutting time and costs for the public."

Regulation and legal accountability

The new course includes community events that bring together legal professionals, policymakers, and the public. As Festinger put it: "We don't want law evolving haphazardly, by pretending this technological change isn't happening or reacting too late. We want to build a forward-facing, inclusive and legally sound set of rules and norms by which we govern ourselves."

Key questions are on the table: What should be regulated, and how? If a harmful deepfake is uploaded to a popular site, should the host face criminal liability, or something else? The initiative is designed to surface these issues early and push for clear, workable answers.

This effort complements other Allard projects, including the UBC AI & Criminal Justice Initiative led by professor Benjamin Perrin, and a Perrin-led study on AI use by police in Canada. Looking ahead, the team is exploring specializations, interdisciplinary teaching, student placements with tech companies, and a new course on AI workflows for legal professionals in partnership with UBC Extended Learning.

Evidence: admissibility and AI fabrication risk

Digital evidence is now routine, but courts treat it inconsistently. Dr. Moira Aikenhead, a lecturer at Allard Law, notes that electronic documents must be authenticated in principle, yet methods vary widely in practice.

"In this landscape, fabricated evidence could be accepted as authentic, or genuine evidence could be ruled inadmissible based on allegations of fabrication," she said. Her view is clear: the legal system needs efficient, reliable ways to verify authenticity-and soon.

Detection: deepfakes and text hallucinations

On the technical front, Dr. Vered Shwartz, UBC assistant professor of computer science, is contributing to a new AI Safety Network launched by the Canadian Institute for Advanced Research. The goal: combine multiple detection tools to flag synthetic media, including deepfakes, manipulated images, and even text hallucinations.

"We want it to be an iterative approach, so that as fabrication technology improves, so too do our detection methods," said Dr. Shwartz. It won't be perfect, but it's better than the status quo-and it may deter misuse. She emphasized collaboration with legal experts: generative tools are widely accessible, and the justice system needs shared standards to tell real from fake. Plans are underway to meet with Allard's team to exchange expertise and explore collaboration.

What this means for your practice

  • Adopt an AI use policy: define allowed tools, mandate human review, log AI-assisted work, and set confidentiality rules (no client data in public models).
  • Tighten evidence workflows: preserve originals, metadata, and hashes; document chain of custody; request authenticity affidavits and expert declarations where appropriate (see the hashing sketch after this list).
  • Set deepfake protocols: challenge suspect media early, demand source files and generation details, and seek protective orders to curb distribution of harmful content.
  • Discovery with AI in mind: when AI is used, seek prompts, versions, settings, and output logs; address privilege and spoliation risks on both sides.
  • Vendor diligence: assess model provenance, data handling, audit features, and indemnities; require accuracy, bias, and security disclosures in contracts.
  • Client communication: disclose responsible AI use, clarify limitations, and set expectations on accuracy, timing, and costs.
  • Risk management: update malpractice coverage, incident response plans, and media takedown playbooks for synthetic content threats.
  • Ongoing competence: prioritize CLE on AI, evidence, and ethics; consider structured upskilling for your team.
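
To ground the evidence-workflow bullet, here is a minimal Python sketch of file fingerprinting and custody logging. It is an illustration only, not anything prescribed by the Allard initiative; the file names, the JSON Lines log format, and the `log_custody_event` helper are assumptions made for the example.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Compute a SHA-256 fingerprint of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_custody_event(evidence: Path, actor: str, action: str,
                      log_path: Path = Path("custody_log.jsonl")) -> dict:
    """Append a timestamped chain-of-custody entry (JSON Lines) for an evidence file."""
    entry = {
        "file": str(evidence),
        "sha256": sha256_of_file(evidence),
        "actor": actor,            # who handled the item
        "action": action,          # e.g. "collected", "copied", "produced"
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with log_path.open("a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

if __name__ == "__main__":
    # Demo only: write a placeholder file so the example runs end to end.
    demo = Path("exhibit_A_demo.txt")
    demo.write_text("placeholder evidence content")
    record = log_custody_event(demo, actor="J. Doe", action="collected")
    print(record["sha256"])
```

In practice a firm would pair this kind of log with secure storage and access controls; the point is simply that hashes recorded at collection time make later fabrication claims easier to test.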

Bottom line

AI will keep advancing; the law has to keep pace on its own terms. UBC's Allard initiative is building the legal, technical, and teaching infrastructure to do exactly that.

For legal professionals, this is a practical moment: put guardrails in place, upgrade evidence procedures, and build relationships with technologists. That's how we protect the public and keep the system worthy of trust.

