Home Office use of AI in asylum decisions likely unlawful, legal opinion finds

Three senior barristers say the Home Office is likely breaking UK law by using AI tools in asylum decisions without telling applicants. An 84-page opinion cites breaches of fairness, data protection, and equality law.

Published on: Mar 17, 2026

Three senior barristers have concluded that the Home Office's use of artificial intelligence in asylum decision-making likely breaches UK law, particularly because applicants are not told the tools are being used.

Robin Allen KC and Dee Masters of Cloisters Chambers, alongside Joshua Jackson of Doughty Street Chambers, published the 84-page legal opinion on 17 March 2026. The analysis, commissioned by digital rights group Open Rights Group, identifies breaches of procedural fairness, data protection, and equality law.

The Home Office deploys two generative AI tools in asylum processing. The Asylum Case Summarisation (ACS) tool creates written summaries of applicant interviews, while the Asylum Policy Search (APS) tool retrieves country-of-origin information for caseworkers. Both generate new text rather than simply organizing existing information.

Transparency failures create legal exposure

The opinion argues that asylum applicants have a common law right to know when AI is used in their cases, how it functions, and to access the AI-generated material. The Home Office currently does not inform applicants that AI tools are involved in their assessments.

This absence of disclosure is "likely to be unlawful" as a matter of procedural fairness, the barristers conclude. The stakes justify transparency: asylum decisions determine whether people receive protection in the UK.

Applicants cannot correct errors in AI summaries they never see. An ACS pilot found the tool produced inaccurate summaries 9% of the time. Five percent of APS users reported lacking confidence in the tool's accuracy.

Risk of material errors in decisions

If caseworkers rely on AI-generated summaries instead of reviewing underlying evidence in full, they risk overlooking relevant considerations. This creates "a significant risk" that decisions will be based on incomplete information.

The barristers identify a particular concern: inaccurate AI summaries could lead to decisions founded on material errors of fact. No safeguards currently require caseworkers to cross-check AI outputs against original source material.

The ACS tool processes sensitive personal data, including information about race, religion, political beliefs, and sexual orientation, which triggers obligations under the UK GDPR for transparency, accuracy, and access rights.

Duty to assess before deployment

The Home Office may be under heightened legal obligations to investigate AI tool performance before using the tools in asylum determinations. The opinion references the "Tameside duty of inquiry", a public law principle requiring decision-makers to properly investigate relevant matters.

The department risks breaching this duty if it fails to assess tool accuracy, effects on decision quality, discrimination risks, or whether non-AI alternatives could achieve the same efficiency gains.

No published Equality Impact Assessment exists. This means the Home Office cannot demonstrate it has satisfied the Public Sector Equality Duty or assessed and monitored potential discriminatory effects.

Implications for asylum applicants

The opinion gives asylum applicants grounds to challenge decisions where AI tools were used in their assessment. Applicants can now point to specific legal grounds, namely procedural fairness, data protection, and equality law, when contesting determinations.

Limited oversight currently exists. Civil society and regulators such as the Independent Chief Inspector of Borders and Immigration have restricted visibility into how these tools operate, reducing accountability and public scrutiny.

Robin Allen KC said: "If AI tools are influencing asylum decisions, there must be full transparency about how those systems operate and how their outputs are used. Without that transparency, it becomes extremely difficult to ensure that decisions affecting fundamental rights are lawful and fair."

For legal professionals advising asylum applicants, the opinion provides a framework for identifying unlawful AI use and constructing challenges based on established public law principles.

