Great (public) expectations: the growing disconnect on AI regulation
Government is easing off the throttle on AI regulation just as the public wants firmer control. The UK has paused work on a comprehensive AI bill, and the EU is exploring changes that could relax parts of its existing data and AI laws. The message from voters is the opposite: make AI safe before it's sold, and make that promise enforceable.
For those in government, this isn't a theoretical debate. Weak rules don't speed adoption; they scare buyers, confuse suppliers and leave public bodies to carry the risk. Clear, binding safeguards are now the fastest route to safe deployment at scale.
What the public is asking for
- Safety before speed: 89% say AI products shouldn't be rolled out until proven safe, even if that slows things down.
- Ethics over economics: almost three-quarters support restricting or banning some AI uses on social or ethical grounds, even if there's economic upside.
- Sovereignty counts: fewer than one in four oppose prioritising UK AI capability over importing more powerful tools; only 38% want to loosen rules just to keep up internationally.
- Independence and teeth: 89% support independent regulation; big majorities back removal powers, stop-use orders, and mandatory safety testing.
- Transparency on costs: strong support for disclosure of societal impacts and the real economic costs, including energy and resource use.
Trust is low, risk is high
People who feel most exposed to harm from AI (older citizens, those without higher education, and those less confident online) also feel least able to influence how it's governed. 84% worry that government will put the tech sector's needs ahead of the public's. Over half don't trust large tech firms to act in the public interest.
Inside the system, the pattern is the same. Public bodies report uncertainty about legality and oversight, especially for frontier models. Even insurers are pulling back, with exclusions for AI risks creeping into corporate policies. Adoption without clear guardrails shifts liability from vendors to departments, local services and frontline staff.
Why sandboxes alone won't cut it
AI sandboxes can unblock testing. But without independent oversight, enforceable standards and transparency, they won't build confidence. The health sector's experience makes the point plainly: where regulatory clarity is thin, leaders hesitate and pilots stall.
EU direction matters for the UK
Proposed EU "simplification" across data and AI legislative files would reopen the hard-won rules that make AI systems verifiable and accountable. Looser data protections mean weaker model accountability. For context on the current frameworks, see the European Commission's overview of EU data protection rules (GDPR) and its approach to the AI Act.
Policy actions government can take now
- Make "safe before sale" the rule for high-risk and frontier models. Require pre-deployment safety testing, model evaluation and documentation as standard.
- Give regulators recall and stop-use powers. If a system causes harm, they must be able to pull it from public access fast.
- Mandate independent testing and red-teaming. Vendor self-attestation isn't enough for systems that influence rights, services or safety.
- Require transparency on total cost of AI. Include energy use, resource demands, error rates, incident logs and escalation paths.
- Set procurement gates. No public contract without a completed risk assessment, auditing access, data governance plan and clear lines of accountability.
- Create an AI incident reporting regime. Standard definitions, time-bound reporting and a shared learning loop across departments; a minimal record sketch follows this list.
- Protect data rights. Any relaxation that weakens auditability or redress will backfire on trust and slow responsible uptake.
- Resource regulators. Independence needs budget, technical talent and the legal authority to act.
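To make "standard definitions and time-bound reporting" concrete, here is a minimal sketch of a shared incident record in Python. The field names, severity levels and the 72-hour reporting window are illustrative assumptions for the example, not a proposed government standard.

```python
# Illustrative sketch of a shared AI incident record. Field names, severity
# definitions and the 72-hour window are assumptions, not a mandated standard.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum

class Severity(Enum):
    LOW = "low"            # degraded service, no harm to individuals
    MODERATE = "moderate"  # incorrect outputs caught before harm occurred
    HIGH = "high"          # harm to individuals or loss of a public service
    CRITICAL = "critical"  # widespread harm, or safety and rights impacts

@dataclass
class AIIncident:
    system_id: str                  # which deployed system was involved
    department: str                 # owning department or public body
    severity: Severity
    summary: str                    # plain-language account of what happened
    detected_at: datetime
    reported_at: datetime | None = None
    remediation: str = ""           # action taken and escalation path followed
    lessons: list[str] = field(default_factory=list)  # feeds the shared learning loop

    def report_overdue(self, now: datetime, window_hours: int = 72) -> bool:
        """True if the incident has not been reported within the window."""
        deadline = self.detected_at + timedelta(hours=window_hours)
        return self.reported_at is None and now > deadline
```

A common record like this is what lets departments compare incidents on the same terms and feed a shared learning loop, rather than each body inventing its own definitions.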
How to run adoption with guardrails
- Use sandboxes with independent monitors. Publish test plans, metrics and results. No hidden exemptions.
- Start with narrow, reversible use cases. Gate expansion on measured safety and service outcomes.
- Share evaluation assets. Common test suites, model cards and incident taxonomies cut duplication across government; see the model card sketch after this list.
- Put frontline users in the loop. Build feedback channels that can pause or roll back deployments on evidence, not opinion.
- Clarify liability in contracts. No deployment without vendor commitments on errors, support and remediation.
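As one example of a shareable evaluation asset, here is a minimal model card structure in Python. The fields and the example values are hypothetical, chosen to show what departments could exchange, not an agreed cross-government schema.

```python
# Illustrative model card for sharing evaluation assets across departments.
# Fields and example values are hypothetical, not an agreed schema.
from dataclasses import dataclass

@dataclass
class ModelCard:
    model_name: str
    supplier: str
    intended_use: str                     # the narrow use case that was approved
    out_of_scope_uses: list[str]          # uses explicitly not approved
    evaluation_results: dict[str, float]  # e.g. accuracy, error rates by group
    known_limitations: list[str]
    incident_taxonomy_tags: list[str]     # shared vocabulary for incident reporting
    last_reviewed: str                    # ISO date of the last independent review

# Hypothetical example card that a second department could reuse when
# assessing the same model, instead of repeating the first department's testing.
card = ModelCard(
    model_name="example-triage-model",
    supplier="ExampleVendor Ltd",
    intended_use="Routing citizen queries to the right service team",
    out_of_scope_uses=["Eligibility or benefit decisions"],
    evaluation_results={"accuracy": 0.91, "false_positive_rate": 0.04},
    known_limitations=["Lower accuracy on non-English queries"],
    incident_taxonomy_tags=["misrouting", "data-quality"],
    last_reviewed="2025-01-15",
)
```

Published in a common format, cards like this are what turn one department's evaluation work into an asset the rest of government can reuse.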
Drop the "builders vs blockers" myth
Clear rules don't slow progress; they reduce uncertainty. Strong pre-market checks and meaningful oversight make pilots safer, procurement cleaner and insurers less wary. That's how you get deployment that sticks, not press-release pilots that quietly fade.
A 90-day plan for departments
- Publish a "safe before sale" policy for AI used in your services, with thresholds for frontier models.
- Stand up an AI risk board with external experts and user representatives. Meet monthly. Publish minutes.
- Adopt a standard model evaluation pack: threat models, red-team protocols, bias testing, and security baselines.
- Update procurement templates to require audit access, data governance, incident reporting and kill-switch procedures; a simple gate-check sketch covering the evaluation pack follows this list.
- Run one sandbox with independent oversight and a public test report. Treat it as the template for future projects.
- Train product owners and commercial teams on AI risk, evaluation and contract clauses. Keep it practical and scenario-based. For curated upskilling by role, see courses by job.
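To illustrate how the standard evaluation pack and the updated procurement templates fit together, here is a simple gate check in Python. The artefact names and the pass/fail logic are assumptions for the example, not an official template.

```python
# Illustrative procurement-gate check built on a standard evaluation pack.
# Artefact names and gate logic are assumptions, not an official template.

# Artefacts this example expects before a contract or deployment proceeds.
REQUIRED_ARTEFACTS = [
    "threat_model",
    "red_team_report",
    "bias_testing_results",
    "security_baseline",
    "data_governance_plan",
    "audit_access_clause",
    "incident_reporting_plan",
    "kill_switch_procedure",
]

def procurement_gate(submitted: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (passed, missing): the gate fails if any required artefact is absent."""
    missing = [name for name in REQUIRED_ARTEFACTS if not submitted.get(name, False)]
    return (len(missing) == 0, missing)

# Usage: a submission missing the kill-switch procedure fails the gate.
passed, missing = procurement_gate({name: True for name in REQUIRED_ARTEFACTS[:-1]})
print(passed, missing)  # False ['kill_switch_procedure']
```

The point of the gate is that it is mechanical: if an artefact is missing, the contract or deployment does not proceed, and the missing items are named for the supplier to fix.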
The bottom line
The public wants independent oversight, clear rights and real enforcement. Government wants adoption that improves services and productivity. "Safe before sale," backed by testing, transparency and enforcement powers, is the quickest path to both.