AI in law: friend or foe? The Upper Tribunal's warning on public AI and false citations
Artificial intelligence can speed up legal work. It can also wreck your case and your reputation if you treat it like a trusted researcher.
In UK v Secretary of State for the Home Department ([2026] UKUT 81 (IAC)), sitting under its Hamid supervisory jurisdiction, the Upper Tribunal delivered a blunt message: misuse of public AI, false citations, and poor supervision will trigger regulatory scrutiny.
Case 1: The adviser, a phantom citation, and public AI
An accredited immigration adviser filed grounds citing a case that did not exist on BAILII. The formal citation actually matched an unrelated equal pay decision.
The Tribunal ran its own checks using Google's AI, which returned plausible judges and details, yet the case did not exist. The lesson: public AI can fabricate authorities that look right at first glance.
The Panel stressed two points. First, placing false authorities before the court through inadequate checking will ordinarily justify regulatory referral. Second, putting client letters or decisions into open-source AI breaches confidentiality and waives privilege, engaging duties to notify the regulator and consult the ICO.
Here, the adviser self-referred and accepted training. Given the self-report and the context, the Tribunal took no further action, but the warning stands.
Case 2: A junior caseworker, broken supervision, and SRA referral
In a separate matter, a firm filed grounds riddled with incorrect and untraceable citations, including a misdescribed court and cases that could not be found. The documents were attributed variously to a "Senior Solicitor," a "Legal Assistant," and, ultimately, a "part-time trainee lawyer" who turned out to be a very junior caseworker.
The COLP lacked clarity on staff roles, underestimated how readily junior staff could use AI, and kept no reliable records of who drafted what or which precedents were used. The Tribunal saw a real risk that similar errors existed elsewhere.
Outcome: the Tribunal referred the firm's COLP to the SRA. Good intentions did not save the day; supervision and accuracy fell below professional standards.
On judicial resources and professional duty
"The Upper Tribunal cannot afford to have its limited resources absorbed by representatives who place false information before the Tribunal… The citation of cases which do not exist sends that judge on a fool's errand."
"The primary duty of regulated lawyers is… to the cause of truth and justice. That duty is not discharged by professional representatives who knowingly or recklessly place false information before [it], or who fail to supervise the work undertaken by other members of their firm for whom they are responsible."
The core takeaways
- Public AI is unsuitable for legal research. Outputs may be plausible and still be wrong.
- Uploading client material to public AI tools breaches confidentiality and may waive legal advice privilege.
- False or unverified citations will usually lead to regulatory referral.
- Supervision is not optional. Leaders must control who drafts, how sources are checked, and what tools are approved.
Minimum viable AI policy for law firms
- Prohibit public AI for legal research and any client data. Define "public" clearly and list banned tools.
- Approve specific tools (if any) that meet security, privacy, and audit standards. Document DPIAs where required.
- No client content in public tools, ever. Treat this as a confidentiality and privilege issue, not just IT.
- Logging and audit: track who drafted, which sources were used, and who reviewed. Keep versioned records.
- Two-stage review for any submission citing authorities: drafter verifies; supervisor independently re-verifies.
- Training and access control: junior staff get clear rules, limited tool access, and documented supervision.
- Incident protocol: if a false citation slips through, immediately notify the court as appropriate, inform the client, and consider self-referral to the regulator and contact with the ICO.
Citation verification workflow you can rely on
- Locate each case on a primary source (e.g., BAILII) or official reporter. Screenshots or PDFs are not enough; obtain the full-text link.
- Confirm the court, date, neutral citation, and judges match your reference.
- Read the relevant passages yourself. Do not rely on summaries, blogs, or AI-generated extracts.
- Check that the proposition you advance is actually supported by the passages cited.
- Attach pinpoint citations and keep a disclosure-ready pack of the authorities relied on.
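None of the steps above can be automated away, but the very first sanity check can be: a citation whose format is malformed cannot be a genuine neutral citation. The sketch below, a minimal and assumed pattern covering only a handful of common UK courts, flags citations that fail even that basic test. Passing it proves nothing; the full text must still be located and read on a primary source.

```python
import re

# Illustrative pattern for UK neutral citations, e.g. "[2026] UKUT 81 (IAC)".
# The court list is a small, assumed sample, not an exhaustive registry.
NEUTRAL_CITATION = re.compile(
    r"\[(?P<year>\d{4})\]\s+"                 # year in square brackets
    r"(?P<court>UKSC|UKPC|UKUT|UKFTT|EWCA|EWHC)\s+"
    r"(?P<number>\d+)"                        # sequential case number
    r"(?:\s+\((?P<division>[A-Za-z]+)\))?"    # optional division, e.g. (IAC)
)

def flag_malformed(citations):
    """Return citations that do not even match the neutral-citation format."""
    return [c for c in citations if not NEUTRAL_CITATION.fullmatch(c.strip())]

print(flag_malformed([
    "[2026] UKUT 81 (IAC)",   # well-formed: passes the format check
    "2026 UKUT 81 IAC",       # malformed: missing brackets, flagged
]))
```

A format check like this belongs at the very start of the workflow, as a cheap filter; it never replaces verifying the case on BAILII or an official reporter.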
Supervision: what leadership must do
- Define who may draft pleadings and who must sign off. No "ghost authors."
- Maintain a matter-level activity log recording the drafter, reviewer, sources checked, and tools used.
- Issue a written AI and research policy; brief all staff; test understanding; repeat quarterly.
- Audit a sample of filings monthly for citation accuracy and policy compliance. Fix gaps fast.
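A matter-level activity log of the kind described above needs very little machinery; what matters is that every filing records a named drafter, a named reviewer, the sources checked, and the tools used. The sketch below is one possible shape for such a record, with entirely illustrative field names and values, written to CSV so it stays versionable and audit-friendly.

```python
import csv
import datetime
import io

# Illustrative schema for a matter-level activity log.
# Field names are assumptions, not a prescribed standard.
FIELDS = ["matter_id", "document", "drafter", "reviewer",
          "tools_used", "sources_checked", "timestamp"]

def log_entry(writer, **fields):
    """Append one activity record, timestamped in UTC."""
    fields.setdefault(
        "timestamp",
        datetime.datetime.now(datetime.timezone.utc).isoformat(),
    )
    writer.writerow(fields)

# Demonstration with an in-memory buffer; a firm would use a real,
# versioned file or database. All values below are hypothetical.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
log_entry(
    writer,
    matter_id="IM-2026-0142",
    document="grounds_of_appeal_v3.docx",
    drafter="A. Caseworker",
    reviewer="B. Supervisor",
    tools_used="approved research database only",
    sources_checked="BAILII full text, pinpoint paragraphs verified",
)
print(buf.getvalue())
```

The point is not the tooling but the discipline: a record like this is what lets a COLP answer, with evidence, who drafted a filing, who reviewed it, and how its authorities were checked.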
Regulatory lens
- Expect referral where false authorities reach the court, even if the initial error was careless.
- Confidentiality and privilege breaches via public AI engage regulatory and data protection duties.
- Revisit your obligations under the SRA Standards and Regulations.
Practical training resources
- AI for Legal - practical guidance and courses on responsible AI use, legal research with LLMs, and compliance.
- AI Learning Path for Paralegals - structured training for junior staff and caseworkers on safe, supervised AI workflows.
The signal is clear. Verify every authority. Keep client data out of public AI. Document supervision. If you can't prove your process, you don't have one.