Experts warn UAE's plan to hand half of government services to AI puts citizens at risk

The UAE plans to run half its government services through autonomous AI within two years. Experts warn the move repeats a documented pattern: the Netherlands, Australia, and the U.S. all saw algorithmic systems harm thousands of people before anyone intervened.

Published on: May 07, 2026

Government AI Plans Risk Repeating Costly Mistakes, Experts Warn

The United Arab Emirates announced plans this month to run half its government services through autonomous AI systems within two years. The AI would operate as an "executive partner" that analyzes, decides, executes and improves without human intervention. Experts say the plan is reckless and could trigger a global race toward similar systems that repeat documented harms.

The warning comes from researchers who have watched governments worldwide delegate decisions to algorithms with disastrous results. Each case followed the same pattern: efficiency gains masked systematic failures that harmed citizens with no clear path to appeal.

Three Major Failures Show the Pattern

In 2021, a self-learning system in the Netherlands wrongly accused roughly 35,000 families of childcare benefit fraud. Parents were ordered to repay tens of thousands of euros they never owed. Homes were lost. More than 2,000 children were taken into state care. The system had baked discrimination directly into its design, flagging dual nationality and foreign-sounding names as fraud risk factors.

Australia's Robodebt scheme pursued 433,000 welfare recipients for A$1.7 billion in debts that were later found to be unlawful, between 2015 and 2019. A Royal Commission concluded the program was "neither fair nor legal." Mothers testified that their sons killed themselves after receiving debt notices they had no way to challenge.

Arkansas and Idaho replaced nurses with algorithms to assess home care eligibility. People with cerebral palsy, quadriplegia and multiple sclerosis had their care cut by 20 to 50 percent overnight. Courts eventually halted the systems, but not before preventable medical complications occurred.

Scale Amplifies the Damage

Each case involved a single system within a single agency. The UAE proposes handling half of all government services this way.

When a caseworker makes a mistake, one person suffers. When an AI agent does, thousands can be affected before anyone notices. The opacity of these systems makes the problem worse. Agentic systems make decisions in sequence, with each step building on the last. By the time harm becomes visible, the causal trail is lost.

Arkansas's algorithmic health-benefit system was so opaque that no one, not even its creators, could fully explain how it worked. A federal court described it as "wildly irrational." Trade secrets and proprietary frameworks can hide how decisions are actually made.

Citizens Bear the Burden of Proof

AI systems invert the burden of proof: citizens must demonstrate their innocence rather than the state having to justify its actions. Those least able to navigate appeals, people with limited time, money, language proficiency and legal access, suffer most.

Consider a single mother whose childcare benefits freeze after an AI flags her bank activity. She navigates an appeals process that sends her from one automated system to another with no human contact, just as rent comes due. Or a migrant worker whose residency renewal is denied because the system cannot parse employer filings, rendering him effectively undocumented.

These are not hypotheticals. They are documented patterns that agentic AI intensifies.

Accountability Disappears

The UAE claims its guiding principle is "people come first." The design suggests otherwise. A government that evaluates ministries by speed of AI adoption is tracking vendor metrics, not citizen welfare.

A government's core responsibility is a duty of care grounded in human judgment. Speed of adoption is a vendor's metric. When governments embrace autonomous decision-making for efficiency, they sign away accountability.

Every algorithm scandal of recent years raises the same questions: Who is in charge, and who made the decision? In a government run by agentic AI, those questions have no clear answers. The system decides, updates itself, and moves on. Citizens have no recourse when things go wrong.

Democratic accountability does not erode through an open power grab. It erodes through procurement decisions that quietly displace human oversight. By undermining trust in institutions when it is already dangerously low, these systems serve the interests of tech companies driving the AI revolution.

An Alternative Path Exists

The UAE has the resources, talent and political stability to build a human-centered digital government. It could augment human decision-making rather than replace it, setting a global standard.

The costs of getting this wrong extend beyond the UAE. They are borne by a single mother in another country whose benefits are cut by an algorithm she never knew existed, and by countless others like her around the world.

For government workers evaluating AI proposals, the lesson is direct: government AI systems require human oversight and accountability mechanisms built into the design from the start. AI agents and automation cannot replace the judgment calls that protect citizens' rights.

