SFU and Caseway Make 100 Million Court Decisions Searchable for AI: Will It Help People Without Lawyers?

SFU and Caseway are indexing 100M court decisions so AI can point to the original rulings. The study tests whether this helps people without lawyers make better early calls.

Published on: Jan 10, 2026

SFU launches legal-AI collaboration with Caseway to improve access to justice

January 09, 2026

Can making 100 million court decisions fully searchable improve outcomes for people without lawyers? Simon Fraser University's School of Computing Science and Caseway AI are setting up a research program to find out.

What's being built: machine-readable precedent at scale

The collaboration is being developed as a Mitacs-funded project (application in progress). Given the scale, both teams have already started technical work.

The goal: publish and index more than 100 million Canadian and U.S. court decisions in a format that works for people and modern AI systems. Each decision is structured, indexed, and discoverable so large language models can reference the primary source directly: no summaries, no hearsay.
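Neither team has published a schema yet. As a minimal sketch of what "structured, indexed, and discoverable" could mean in practice, a machine-readable decision record might carry fields like the following (every name here is hypothetical, not Caseway's format):

```python
from dataclasses import dataclass, field

@dataclass
class CourtDecision:
    """Hypothetical machine-readable record for one published decision."""
    decision_id: str   # stable identifier, e.g. a neutral citation (assumed)
    jurisdiction: str  # e.g. "CA-BC" or "US-NY" (assumed convention)
    court: str         # issuing court
    decided: str       # ISO 8601 decision date
    title: str         # style of cause
    url: str           # canonical link to the official text
    paragraphs: list[str] = field(default_factory=list)       # numbered reasons
    cited_decisions: list[str] = field(default_factory=list)  # outbound citations
```

Paragraph-level structure is what would let a model point at the specific passage it relied on rather than the judgment as a whole.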

As Dr. Angel Chang puts it: "This research is not about replacing lawyers or automating legal advice. It's about asking a careful, evidence-based question: if people without lawyers can access accurate, searchable court decisions through systems that use artificial intelligence, does that change how they understand their options and make early legal decisions? That's what we want to measure."

Why this matters: fewer hallucinations, more verifiable answers

Most general-purpose AI tools don't have direct access to real court decisions. They lean on blogs, forums, and secondary commentary, which increases hallucinations and strips away context. Publishing the primary source changes the baseline.

Caseway's approach makes official decisions linkable so AI outputs can point back to the original judgment for verification. That enables practitioners and self-represented people to check claims against the exact text a judge wrote.
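The pipeline itself hasn't been described publicly. As a rough sketch of the grounding idea, assuming a retrieval step that returns excerpts along with their citations and official URLs (the function and field names below are illustrative, not Caseway's API):

```python
def build_grounded_prompt(question: str, passages: list[dict]) -> str:
    """Assemble a prompt that restricts the model to quoted primary sources.

    `passages` is assumed to be a list of dicts with 'text', 'citation',
    and 'url' keys produced by an earlier retrieval step.
    """
    context = "\n\n".join(
        f"[{i}] {p['citation']} ({p['url']})\n{p['text']}"
        for i, p in enumerate(passages, start=1)
    )
    return (
        "Answer using only the excerpts below. Cite the bracketed number "
        "for every claim so the reader can verify it against the original "
        "judgment.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
```

An answer produced this way can be checked line by line against the linked judgments, which is the verification loop the article describes.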

For context on AI hallucinations, see guidance from NIST's AI Risk Management resources. For the funding mechanism powering collaborations like this, learn more about Mitacs.

Research design: measure behavior, not hype

The study focuses on early-stage decision-making for people without lawyers. Instead of testing whether AI can give advice, the team will evaluate whether access to accurate, searchable precedent changes decisions upstream. The planned measures:

  • Comprehension of relevant precedent and legal standards
  • Confidence in next steps (e.g., timelines, documents, key issues)
  • Quality of early choices (e.g., venue, claim framing, citations)
  • Ability to verify AI explanations against linked primary sources

Technical track: retrieval and embeddings

Under Dr. Chang's supervision, SFU students are prototyping core infrastructure:

  • Retrieval system design that supports jurisdiction, court level, time range, and topic filters (see the first sketch after this list)
  • Embedding experiments for semantic search across mixed jurisdictions and citation styles
  • Ranking evaluation using relevance judgments and outcome-focused metrics
  • Citation graph modeling to surface influential decisions and related precedent (see the second sketch after this list)
  • Human-in-the-loop review to check precision, recall, and link quality
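None of this infrastructure has been published. As a minimal sketch of the first two items, assuming a metadata-filtered index sitting in front of an embedding model (the `embed` stub, toy corpus, identifiers, and field names below are placeholders, not the SFU prototype):

```python
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Stand-in embedder; a real system would call a sentence-embedding model."""
    rng = np.random.default_rng(0)  # deterministic toy vectors for the sketch
    return rng.normal(size=(len(texts), 384))

# Toy corpus: each entry pairs filterable metadata with text to be embedded.
corpus = [
    {"id": "dec-001", "jurisdiction": "CA-BC", "level": "trial",
     "year": 2021, "topic": "tenancy", "text": "The tenant applied to set aside..."},
    {"id": "dec-002", "jurisdiction": "CA-ON", "level": "appellate",
     "year": 2019, "topic": "contracts", "text": "The appeal turns on whether..."},
]

doc_vectors = embed([d["text"] for d in corpus])
doc_vectors /= np.linalg.norm(doc_vectors, axis=1, keepdims=True)

def search(query, jurisdiction=None, level=None, year_range=None, topic=None, k=5):
    """Filter on metadata first, then rank the survivors by cosine similarity."""
    keep = []
    for i, d in enumerate(corpus):
        if jurisdiction and d["jurisdiction"] != jurisdiction:
            continue
        if level and d["level"] != level:
            continue
        if year_range and not (year_range[0] <= d["year"] <= year_range[1]):
            continue
        if topic and d["topic"] != topic:
            continue
        keep.append(i)
    if not keep:
        return []
    q = embed([query])[0]
    q /= np.linalg.norm(q)
    scores = doc_vectors[keep] @ q
    order = np.argsort(scores)[::-1][:k]
    return [(corpus[keep[i]]["id"], float(scores[i])) for i in order]

print(search("eviction notice deadlines", jurisdiction="CA-BC", topic="tenancy"))
```

One common design choice this illustrates: jurisdiction and court level are treated as hard filters rather than soft similarity signals, since precedent from the wrong jurisdiction can be worse than no result at all.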
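And a rough sketch of the citation-graph item, assuming citing-to-cited edges have already been extracted from the decisions; networkx's PageRank stands in here for whatever influence measure the team actually adopts, and the identifiers are placeholders:

```python
import networkx as nx

# Toy citation graph: an edge A -> B means decision A cites decision B.
citations = [
    ("dec-004", "dec-001"),
    ("dec-003", "dec-001"),
    ("dec-003", "dec-002"),
    ("dec-002", "dec-001"),
]

G = nx.DiGraph(citations)

# Because edges point from the citing decision to the cited decision,
# PageRank scores decisions that are cited often, especially by other
# well-cited decisions, as more influential.
influence = nx.pagerank(G, alpha=0.85)

for dec, score in sorted(influence.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{dec}: {score:.3f}")

# Related precedent for a given decision: what it cites and what cites it.
related = set(G.successors("dec-003")) | set(G.predecessors("dec-003"))
print("Related to dec-003:", sorted(related))
```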

Context across Canada: public-interest AI, primary sources first

The work connects with parallel efforts at the University of British Columbia on legal AI. The shared throughline: make primary legal sources machine-readable and publicly accessible, then measure real-world impact.

As Caseway CEO Alistair Vigier explains: "Right now, most AI systems answer legal questions by pulling from Reddit threads, blogs, and second-hand commentary because the real court decisions simply aren't accessible to them. Our goal is to change that by making official judicial decisions searchable at scale and usable by modern language models, so when artificial intelligence explains the law, it can point directly to the same sources judges rely on."

The core question

Will better access to real court decisions lead to better outcomes for people without lawyers? The hypothesis is simple: ground AI outputs in primary sources and you improve accuracy, trust, and early decisions.

If the data supports that, this project could set a new standard for publishing legal information for AI systems: transparent, verifiable, and anchored to the same sources courts use.

What practitioners should watch

  • Quality of retrieval and ranking across jurisdictions and topics
  • How often AI-linked answers match the cited decision's actual holding
  • Changes in pro se behavior: timelines, filings, and issue-spotting
  • Policy implications for public legal information and citation norms


