Judiciary issues updated AI guidance: practical takeaways for legal teams
Date of publication: 31 October 2025
The Courts and Tribunals Judiciary has released refreshed guidance on the use of AI by judicial office holders. It replaces the April 2025 version and applies across courts and tribunals.
While written for the bench, the message applies to all legal representatives. Use AI as a support tool, verify everything it produces, and keep confidential information out of public systems.
Why this matters
Public AI chatbots can produce confident but wrong answers, including fabricated cases and quotes. The output reflects the data they were trained on and can be biased, outdated, or incomplete.
Judges and lawyers are accountable for what goes before the court. If AI helped prepare it, the obligation to check accuracy does not change.
Core principles from the guidance
- Verify all AI output: Treat it as a draft, not a source of truth. Cross-check against authoritative legal sources.
- Protect confidentiality: Do not input private or sensitive information into public AI tools. Assume anything you type could become public.
- Support, don't substitute: AI can assist with admin and summaries, but it must not replace direct engagement with evidence and law.
- Own the result: Judicial office holders and legal representatives are responsible for the material they produce or submit.
Practical guardrails you can apply now
- Turn off chat history where possible and avoid sharing any case-identifying details. Even then, assume disclosure is possible.
- Refuse unnecessary permissions when mobile AI apps request access to your device.
- Use secure, work-provided devices and approved systems for any AI-related work.
- If confidential data is disclosed by mistake, inform your leadership judge/Judicial Office and follow data incident protocols.
- Discuss AI use with clerks, judicial assistants, and staff. Ensure appropriate approvals if using HMCTS or MoJ devices.
Known risks you should expect in court
- Hallucinations: Non-existent cases, bogus citations, incorrect legal propositions, and factual errors.
- Deepfakes: Text, images, or video that look authentic but are fabricated.
- "White text" prompts: Hidden instructions in documents that are invisible to humans but readable by systems.
- Bias: Outputs mirror the biases and gaps in training data; mitigate actively.
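As an illustration of how a "white text" check might be automated before a document is passed to any AI tool, the minimal Python sketch below uses the python-docx library to flag white-coloured or hidden text runs. The library choice and the filename are assumptions for the example, not anything mandated by the guidance; treat it as a starting point, not a complete screening solution.

```python
# Minimal sketch: flag white-coloured or hidden text runs in a .docx before it
# reaches any AI tool. Assumes the python-docx package is installed
# (pip install python-docx); not part of the judicial guidance itself.
from docx import Document
from docx.shared import RGBColor

WHITE = RGBColor(0xFF, 0xFF, 0xFF)

def find_suspect_runs(path: str) -> list[str]:
    """Return text from runs that are white-on-white or marked hidden."""
    suspects = []
    # Body paragraphs only; tables, headers and footers would need extra handling.
    for para in Document(path).paragraphs:
        for run in para.runs:
            colour = run.font.color.rgb  # None unless an explicit RGB colour is set
            if run.text.strip() and (colour == WHITE or run.font.hidden):
                suspects.append(run.text.strip())
    return suspects

if __name__ == "__main__":
    # "bundle_extract.docx" is a hypothetical filename used for illustration.
    for snippet in find_suspect_runs("bundle_extract.docx"):
        print("Possible hidden instruction:", snippet)
```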
Responsibilities of legal representatives
You are responsible for the material you submit. There is usually no need to disclose that AI assisted, provided you have independently verified accuracy and appropriateness.
Given recent missteps, some judges may remind counsel of their duties and ask for confirmation that citations and research have been checked against authoritative sources.
Unrepresented litigants using AI
Chatbots are increasingly used by litigants in person. If submissions appear AI-generated, it is appropriate to inquire, ask what checks were done, and explain that the litigant is responsible for what they present.
- Clues include unfamiliar or US-style citations, American spelling, persuasive prose with obvious errors, and stray lines like "as an AI language model…".
Appropriate uses inside courts and chambers
- Summarising large bodies of text (always validate the summary).
- Drafting presentation outlines.
- Admin support: emails, meeting transcripts/summaries, memoranda.
What to avoid
- Legal research: Don't rely on chatbots to find new information you can't verify. Always check against maintained, authoritative sources.
- Legal analysis: Current public tools do not produce reliable reasoning.
Expectations for the profession
Technology Assisted Review (TAR) has long been part of disclosure. Generative AI is different: it predicts likely words, not verified facts. Treat its output like untested evidence: useful as a prompt, never as a citation source without checks.
Courts should expect some parties to use AI in preparation and should ensure all material presented has been independently verified.
Recent incidents and perspective
The Upper Tribunal (Immigration and Asylum Chamber) recently warned lawyers after fictitious Court of Appeal citations were put forward, apparently generated by a chatbot. Similar issues have surfaced in other immigration cases.
As Sir Geoffrey Vos put it: AI is an "important, innovative and useful tool," but "just like a chain saw, a helicopter or a slicing machine, in the right hands it can be very useful, and in the wrong hands, it can be super-dangerous."
Managing bias and fairness
Bias is not a theoretical risk; it shows up in training data and outputs. Use the Equal Treatment Bench Book to inform decisions and correct distortions where they arise.
Equal Treatment Bench Book - Judiciary
Where to find the guidance
The guidance is published online to promote transparency and public confidence. Check the Courts and Tribunals Judiciary website for the latest version and any updates.
Courts and Tribunals Judiciary
Upskill your team (optional resource)
If your chambers or firm is setting guardrails and training staff on prompt quality and verification, this practical resource may help:
Prompt courses - Complete AI Training