Upper Tribunal warns: verify AI-generated citations or face regulatory referral
The Upper Tribunal (Immigration and Asylum Chamber) has issued guidance and a clear warning on the use of AI after a self-employed barrister cited a fictitious Court of Appeal judgment generated by ChatGPT. In a Hamid judgment published in September 2025, the Tribunal referred Mr Muhammad Mujeebur Rahman to the Bar Standards Board (BSB).
The panel, comprising Mr Justice Dove, President of the Chamber, and Judge Lindsley, found that Mr Rahman misused AI and attempted to mislead the Tribunal, breaching the core duties of honesty and integrity, the duty to the court, and the duty of competence. The conduct arose from his citation of the non-existent case Y (China) [2010] EWCA Civ 116 when drafting grounds of appeal.
At the hearing, Mr Rahman distanced himself from YH (Iraq) [2010] EWCA Civ 116, the genuine case bearing the same neutral citation, and instead offered unrelated authorities. After the panel provided R (Ayinde) [2025] EWHC 1383 (Admin) over the lunch adjournment, he maintained, on the basis of "ChatGPT research", that Y (China) was real, and later submitted internet print-outs rather than a judgment.
In correspondence, he apologised and initially suggested he had meant to cite YH (Iraq), then accepted that that case did not support his propositions. The Tribunal concluded he had acted without honesty, integrity, or competence, but had not deliberately filed false material; rather, he had failed to appreciate that large language models can invent non-existent authorities.
What the headnote makes clear
- AI large language models can fabricate judgments and citations that look plausible but are false.
- Following Divisional Court guidance in R (Ayinde) v London Borough of Haringey; Al-Haroun v Qatar National Bank QPSC [2025] EWHC 1383 (Admin), lawyers are professionally responsible for checking the accuracy of every citation and quotation using reputable legal sources. Misuse that results in false authorities is likely to be referred to regulators such as the BSB or SRA. Deliberate use of false material may justify police investigation or contempt proceedings.
- Unprofessional short-cuts that are likely to mislead the Tribunal are never excusable.
Practical steps for solicitors and counsel
- Verify every case citation and quotation on reputable databases (e.g., official court sites, BAILII, or recognised commercial services). Do not rely on AI output as a source of truth.
- Match neutral citations, party names, court, and decision dates. If any element cannot be verified, do not cite it.
- Keep an audit trail: note where and when each authority was checked and by whom; retain authentic copies or official links for the bundle.
- Supervise AI usage: set a written policy that treats AI as a drafting aid only; require human verification before any AI-assisted content reaches the court.
- Quote from the verified judgment text, not from summaries. Ensure pinpoint references are accurate.
- If an error is discovered, correct it immediately and transparently, and notify the court and the other side.
Consequences and professional duties
The Tribunal's message is simple: duty to the court, honesty, and competent work are non-negotiable. Using AI in a way that leads to false authorities will likely result in regulatory referral; deliberate deceit may trigger criminal or contempt routes.
Bottom line
AI can help with drafting, but it is never an authority. Check citations and quotations against reputable legal sources, or don't use them. Short-cuts that risk misleading the court are unacceptable.
Useful resources: BAILII (for verification of judgments) and the Bar Standards Board (professional conduct and guidance).