AI could decide court cases in minutes, warns senior UK judge - but courts should baulk

AI can draft, summarize, and sort in minutes - but it shouldn't judge. Use it to speed the grind, with strict checks, trusted sources, and humans owning the final call.

Published on: Oct 22, 2025

AI could decide cases in minutes. Here's why the courts should resist - and how your practice should adapt

AI is everywhere in legal work. As Master of the Rolls Sir Geoffrey Vos put it, it's being used for "every purpose under the sun" - more chainsaw than scalpel: hugely useful in the right hands, "super dangerous" in the wrong ones.

He's clear on two things. First, large language models can draft, summarize, and research faster than any junior. Second, letting them decide cases would be a mistake, even if they can process a two-year matter "in a couple of minutes."

What AI is good for right now

  • First-pass contract drafting and playbook alignment (with human edits).
  • Research scaffolding and pinpoint summaries that cut out drudgery.
  • Document sorting, chronology building, and issue spotting before real analysis begins.

Use it to save time. Not to make judgment calls.

Why "AI judges" are a dead end (for now)

  • Judicial decisions are final in practice. You can't "undo" a wrong answer produced by a stochastic tool.
  • Models don't have emotion, idiosyncrasy, empathy, or insight - the human factors that make justice feel legitimate.
  • Machine-learning models are trained on a snapshot of past thinking; human reasoning and societal values keep moving.

Recent wake-up calls: fake citations in UK courts

June saw a public warning after two cases were tainted by invented case law. In a claim against Qatar National Bank, 45 citations were filed - 18 were fabrications, complete with bogus quotes. Public chatbots were involved, and checks were skipped.

In a separate regulatory matter, a pupil barrister cited non-existent authorities five times, reportedly after relying on browser AI summaries.

Dame Victoria Sharp warned of "serious implications for the administration of justice and public confidence" if AI is misused. Sanctions may include public warnings, contempt proceedings, and referral to police. Her reminder stands: these tools can produce coherent, plausible answers that are completely wrong.

The practical playbook: minimize risk, keep the speed

  • Adopt a written AI policy: permitted uses, banned uses, and sign-off thresholds.
  • Use legal-grade tools with citation retrieval and source linking. Treat public chatbots as untrusted aids for brainstorming only.
  • Verification as a rule: no AI-generated authority enters a document until checked against primary sources.
  • Force citations: require URLs or neutral citations for every proposition. No source, no inclusion.
  • Keep an audit trail: prompts, outputs, and verification notes stored with the matter (a minimal record sketch follows this list).
  • Data hygiene: never paste client-confidential material into tools without a data protection impact assessment (DPIA), a vendor-terms review, and redaction where possible.
  • Human accountability: assign a named reviewer. The buck stops with you, not the tool.
  • Client comms: disclose AI use where it affects deliverables, pricing, or timelines.
  • Training: run short, repeated drills on prompt discipline, hallucination detection, and citation checks.
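
To make the audit-trail and verification bullets concrete, here is a minimal Python sketch of what a per-matter record and release gate could look like. The `AIAuditEntry` fields and the `release_gate` function are illustrative assumptions, not part of any real product or court guidance.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIAuditEntry:
    """One record per AI interaction, stored with the matter file."""
    matter_id: str
    tool: str                # name and version of the AI tool used
    prompt: str              # the exact prompt submitted
    output: str              # the raw output received
    reviewer: str            # the named human accountable for this output
    verified: bool = False   # set True only after primary-source checks
    verification_notes: str = ""
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def release_gate(entry: AIAuditEntry) -> AIAuditEntry:
    """Refuse to let unverified AI output into a filed document."""
    if not entry.verified:
        raise PermissionError(
            f"Matter {entry.matter_id}: output from {entry.tool} "
            f"has not been verified by {entry.reviewer}."
        )
    return entry
```

Nothing here is sophisticated, and that is the point: the gate simply refuses to release AI output that a named reviewer has not checked against primary sources.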

Tooling choices that reduce self-inflicted wounds

  • Prefer systems that retrieve from trusted legal databases and show sources inline.
  • Disable features that "guess" citations. Favor strict retrieval over creative answers.
  • Test on your own matters: measure hallucination rates, red-team for edge cases, and set no-go zones.
  • Maintain a whitelist (e.g., established legal research platforms) and a blacklist (public chatbots for anything authority-related); a small gating sketch follows this list.
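
As a sketch of how that whitelist/blacklist could be enforced in practice, the Python below gates tools by task. The tool and task names are hypothetical placeholders, not product references.

```python
# Hypothetical governance rule: tool and task names are placeholders.
AUTHORITY_WHITELIST = {"firm-legal-research-platform"}  # citation-grade tools only

def tool_allowed(tool: str, task: str) -> bool:
    """Gate tools by task: only whitelisted systems may touch authorities."""
    if task == "authority":
        # Public chatbots and anything else off the whitelist stop here.
        return tool in AUTHORITY_WHITELIST
    # Other tasks (brainstorming, first-draft scaffolding) stay open,
    # subject to the firm's written policy and sign-off thresholds.
    return True

# Example: a public chatbot may brainstorm but never supply authorities.
assert tool_allowed("public-chatbot", "brainstorm")
assert not tool_allowed("public-chatbot", "authority")
```

The design choice is to fail closed: anything not explicitly approved for authority work is refused.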

If courts start using AI in procedure

  • Ask for explainability: what inputs were used, which model, and how was the output verified?
  • Protect the right to be heard: ensure any AI triage doesn't block human consideration of arguments or evidence.
  • Record objections promptly where automated steps may prejudice your client.

Bottom line

Use AI for speed and structure. Keep judgment, ethics, and accountability with humans. As Sir Geoffrey Vos suggests, treat AI like a chainsaw: incredibly useful - and only safe with rules, training, and a steady hand.

Resources

Upskilling your team

If your firm is formalizing AI literacy, structured courses can speed up safe adoption. See curated options by role at Complete AI Training - Courses by Job.

