UK launches AI Growth Lab to fast-track safe AI testing and cut red tape

UK plans an AI Growth Lab to let teams trial tools in a supervised sandbox, trimming red tape without skimping on safety. A call for views will set how it runs and who leads.

Categorized in: AI News, Government
Published on: Oct 22, 2025

The government has outlined plans for an AI Growth Lab to let organisations test and pilot responsible AI in a controlled "sandbox" environment. Technology secretary Liz Kendall announced the scheme on 21 October at the Times Tech Summit, alongside a public call for views on how it should run.

The goal is simple: reduce unnecessary bureaucracy, generate real evidence of impact, and get useful AI into public services and UK industries faster, without compromising safety.

What an AI sandbox means in practice

In a sandbox, teams can test AI products with certain regulatory rules temporarily relaxed, under supervision, to gather the data needed to prove real-world value and safety. It lowers the barrier to pilot work while keeping guardrails in place.

"To deliver national renewal, we need to overhaul the old approaches which have stifled enterprise and held back our innovators," Kendall said. "We want to remove the needless red tape that slows progress so we can drive growth and modernise the public services people rely on every day. This isn't about cutting corners - it's about fast-tracking responsible innovations that will improve lives and deliver real benefits."

Who should run it: government or regulators?

The public call for views asks whether the AI Growth Lab should be led centrally by government or operated directly by regulators. That decision will shape speed, consistency across sectors, and how quickly successful pilots can move to deployment.

For departments and regulators, this is a chance to align on risk, data access, and evaluation standards so pilots don't stall later in procurement or assurance.

Industry reaction

TechUK welcomed the move. Deputy CEO Antony Walker said the lab "represents a strong, positive step" that can help companies "safely develop, scale and deploy AI in key sectors of the UK economy".

He added that, done well, the lab should build on lessons from existing sandboxes and work closely with AI businesses to deliver "tangible results and real-world impact".

There's a proven playbook

The UK pioneered the sandbox model with the Financial Conduct Authority's regulatory sandbox in 2016. That work helped firms test new financial services safely, and it remains a useful template for AI pilots in regulated spaces. See the FCA's regulatory sandbox for context: FCA Regulatory Sandbox.

Healthcare has moved early too. The MHRA ran a regulatory AI sandbox pilot in 2024 for standalone AI medical devices and has received £1m to test further AI-assisted tools that could speed drug discovery and clinical trial assessments. The agency also joined the HealthAI global regulatory network as one of 10 founding countries in June 2025. Learn more about the regulator here: MHRA (gov.uk).

Evidence is building. A study in the British Journal of Clinical Pharmacology found that MHRA clinical trial assessors were able to cut approval times by more than half with AI support.

Linked reforms: digital planning checks

Alongside the AI Growth Lab, chancellor Rachel Reeves confirmed progress on a growth-focused regulatory system. One example: digital planning checks, where developers submit photo evidence online and authorities approve applications using AI models, speeding up decisions and increasing consistency.

What government teams can do now

  • Nominate a sandbox lead and core team across policy, legal, data, procurement, and risk.
  • Shortlist 2-3 AI use cases with clear public value, measurable outcomes, and manageable risk.
  • Map data needs early: access, quality, privacy, retention, and audit trails.
  • Define success criteria upfront: safety thresholds, accuracy, cost, time saved, equity impacts.
  • Align with your regulator on assurance requirements so pilots transition smoothly to live.
  • Set a responsible AI plan: risk assessment, human oversight, bias testing, and incident response.
  • Prep procurement pathways for scaling successful pilots (templates, DPIAs, model cards, SLAs).
  • Plan workforce skills: who needs training to run, monitor, and govern AI tools.

Skills and training

If your team needs structured upskilling for pilot design, evaluation, and safe deployment, scan current options by role here: AI courses by job.

What to expect next

The call for views will inform how the AI Growth Lab is set up and governed. Departments, regulators, and public bodies should prepare responses that reflect practical needs: data access, safety standards, oversight, and routes to scale.

Done well, the lab can cut delays, prove value faster, and get responsible AI into the services where it counts, while keeping public trust front and centre.
