Rights at risk as UK AI oversight lags and watchdog budgets stagnate

UK watchdogs say they have the laws to police AI, but not the people or funding to enforce them. They want coordination, audit authority, and resources to act before harm hits.

Published on: Feb 13, 2026

UK regulators need more resources to tackle AI

The UK has laws to deal with AI harms. What it lacks is capacity.

Senior regulators told Parliament's Joint Committee on Human Rights that funding gaps and fragmented oversight are holding back enforcement. The tools exist, but the system can't move fast enough to stop harm before it happens.

What regulators told Parliament

Existing legislation, including the Equality Act 2010, can address AI-driven discrimination and rights violations. The problem is scale and speed.

Mary-Ann Stephenson, chair of the Equality and Human Rights Commission (EHRC), said resources are the biggest blocker. "There is a great deal more that we would like to do in this area if we had more resources." The EHRC's budget has been frozen at £17.1m since 2012, a 35% cut in real terms.

Andrew Breeze, Ofcom's director for online safety technology policy, warned that oversight is lagging behind fast-moving AI deployments. Regulators have powers over use cases, not the systems themselves, and none can approve or reject AI products before launch.

Where the framework falls short

Enforcement is mostly reactive. Cases are picked up after people are harmed, not before.

There's also no single mandate for AI. More than a dozen UK regulators touch the space, but coordination is uneven and gaps are easy to exploit.

Coordination over a super-regulator

Some MPs and peers floated a dedicated AI regulator. Baroness Chakrabarti drew the comparison with medicines: "We would not dream of not having a specific medicines regulator."

Regulators leaned toward a strong coordinating body instead, given that AI cuts across sectors. They highlighted joint working models already in place and called for tighter information-sharing and compulsory audit powers. Elizabeth Denham, the former Information Commissioner, said those steps would stop large tech firms from gaming the seams between regulators.

Breeze pressed for deeper international cooperation, especially on AI-generated disinformation. He noted the Online Safety Act doesn't let Ofcom regulate legal but harmful misinformation, except where children are involved.

Rights risks that can't be ignored

Civil liberties groups warned that policy shifts have weakened protections against automated decision-making. Silkie Carlo of Big Brother Watch cautioned that AI-enabled mass surveillance could "spiral out of control," with systems built for one purpose easily repurposed for another.

What government and regulators can do now

  • Fund core enforcement. Tie budget uplifts to measurable outcomes: audits completed, guidance issued, cases resolved, and harms prevented.
  • Stand up a permanent AI coordination unit across regulators with a single risk register, joint guidance, and a shared incident response.
  • Create statutory data-sharing gateways so regulators can exchange evidence quickly, including protected channels for confidential information.
  • Legislate for compulsory algorithmic audits where there is credible risk of discrimination, safety issues, or market harm.
  • Require algorithmic impact assessments for public procurement and high-risk private deployments that affect rights or access to services.
  • Pilot pre-deployment "assurance sandboxes" to test high-risk systems before they reach the public.
  • Pool specialist talent. Build a shared roster of AI engineers, data scientists, and forensic auditors available to all regulators.
  • Issue joint, plain-English guidance on the Equality Act, data protection, and safety duties as they apply to AI.
  • Set minimum transparency for automated decisions that affect individuals: explanation, contestability, and human review.
  • Coordinate internationally on standards for model evaluation, provenance, and watermarking to curb synthetic disinformation.
  • Close the gap on "legal but harmful" misinformation exposure with targeted powers and clear thresholds, especially around elections.

Funding priorities that move the needle

  • Investigation and enforcement teams focused on AI cases.
  • Independent testing capacity: red-teaming, bias and safety evaluation, and secure compute for evidence handling.
  • Legal capacity for faster litigation and interim measures when urgent harm is likely.
  • Mandatory training for inspectors and case officers on AI systems, validation, and audit methods.
  • Public reporting lines and whistleblower protections specific to automated systems.
  • Community engagement to surface harms affecting protected groups early.

The bottom line

The UK doesn't need to rewrite all its laws to keep people safe from AI harms. It needs to fund the regulators it already has, give them the means to work as one system, and act before damage lands on the public.

Start with money, coordination, and audit powers. Move to shared talent, clear guidance, and proactive testing. That's how enforcement gets ahead of deployment.

Upskill your team

If you're building internal capability for AI assurance and oversight, see role-based options at Complete AI Training - Courses by Job.

