Public scepticism puts Starmer's AI growth strategy at risk
Nearly twice as many Britons see AI as a risk (39% vs 20%), imperilling UK growth plans. TBI urges early public input, safety tests and clear outcomes to rebuild trust.

Nearly twice as many Britons see AI as an economic risk as see it as an opportunity. New polling from the Tony Blair Institute (TBI) and Ipsos shows 39% view AI as a threat, against 20% who see it as an opportunity.
This trust gap could stall the Government's plan to put AI at the core of its growth and productivity agenda. The warning lands a week after the Prime Minister signed a new tech deal with Donald Trump, alongside £31 billion of private investment, including £22 billion from Microsoft to expand UK AI infrastructure and build the country's largest AI supercomputer.
Why this matters for government leaders
- Trust drives adoption: TBI finds trust correlates with regular use, which is concentrated among younger, wealthier men.
- Adoption is shallow: More than half of adults say they did not use AI in the last year.
- Main blocker: 38% cite distrust in AI outputs as the top barrier.
- Political risk: Without public involvement in how AI is built and governed, delivery will stall and savings targets will slip.
What TBI recommends
- Involve the public early: Open up development and testing, including inviting the public into AI labs.
- Safety first: Require thorough safety testing before deployment and ongoing monitoring in live services.
- Communicate outcomes, not tech: Focus on benefits to daily life, not model specs and jargon.
- National AI training: Roll out practical skills for people from all backgrounds, not just early adopters.
Government's current position
A Government spokesperson said AI can boost the economy and improve public services. The Government plans to partner with leading tech firms to deliver AI skills training to 7.5 million workers by 2030, and expects around 10 million people to be using AI in their roles by 2035. It has also launched an AI Assurance Roadmap to build trust and increase adoption across the economy.
Actions for departments in the next 12 months
- Publish a clear use policy: Define where AI will and will not be used in your services, including red lines and escalation paths.
- Stand up citizen panels: Run citizen juries or focus groups for high-impact AI projects; publish what changed based on their input.
- Pilot in low-risk, high-volume areas: Start with workflows like triage, summarisation, and routing; measure error rates and time saved.
- Mandate human-in-the-loop for decisions with consequences: Benefits, healthcare, policing-adjacent tasks, and immigration require review by trained staff.
- Require pre-deployment testing: Accuracy, bias, security, and stress testing under realistic conditions; document results.
- Procurement with assurance: Vendors must provide model cards, data provenance, evaluation results, and incident response plans.
- Data minimisation and privacy by default: Use the least data required; log all model inputs/outputs for audit (see the logging sketch after this list).
- Staff training by role: AI literacy for all; deeper skills for caseworkers, analysts, and managers who approve AI-assisted work.
- Transparent service guarantees: Communicate what AI will do, expected accuracy, appeal routes, and response times.
- Independent oversight: Assign a departmental assurance lead and enable external scrutiny for high-risk systems.
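To make the audit-logging action concrete, here is a minimal Python sketch of a wrapper that records every model call for later review while keeping raw personal data out of the log file. The function names, file path, and record fields are illustrative assumptions, not a prescribed departmental standard.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Illustrative audit logger: one JSON record per model call, written to a
# local file (a real deployment would use a secured, centralised store).
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.jsonl"))

def audited_call(model_fn, prompt: str, case_id: str) -> str:
    """Call a model function and write an audit record (hypothetical wrapper)."""
    output = model_fn(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,  # pseudonymous reference, never a citizen's name
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_chars": len(output),  # data minimisation: log size, not content
        "model": getattr(model_fn, "__name__", "unknown"),
    }
    audit_log.info(json.dumps(record))
    return output

if __name__ == "__main__":
    echo_model = lambda p: p.upper()  # stand-in for a real model call
    print(audited_call(echo_model, "Summarise case notes", case_id="C-1042"))
```

Hashing the prompt rather than storing it lets auditors confirm which input produced which output without holding personal data in the log itself.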
Build trust into delivery
- Plain-language model cards: Publish purpose, limitations, known failure modes, and who is accountable.
- Bias checks by cohort: Test performance across age, gender, ethnicity, disability, and region; publish parity metrics (a worked sketch follows below).
- Red-teaming and incident logs: Simulate misuse and record real incidents; share lessons learned across departments.
- Human-readable decisions: Provide clear reasons for outcomes and easy appeal mechanisms.
- Align with national guidance: Use the UK's AI Assurance Roadmap for standards, assurance activities, and documentation.
UK AI Assurance Roadmap (GOV.UK)
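As one way to run the cohort parity check above, the sketch below computes per-cohort positive-decision rates and reports the worst-to-best ratio, in the style of a "four-fifths" disparity test. The data layout and cohort labels are assumptions for illustration only.

```python
from collections import defaultdict

def parity_by_cohort(records):
    """records: iterable of (cohort_label, decision) pairs, decision in {0, 1}."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for cohort, decision in records:
        totals[cohort] += 1
        positives[cohort] += decision
    rates = {c: positives[c] / totals[c] for c in totals}
    ratio = min(rates.values()) / max(rates.values())  # 1.0 = perfect parity
    return rates, ratio

# Example: approval decisions grouped by region (made-up data).
sample = [("North", 1), ("North", 0), ("South", 1), ("South", 1)]
rates, ratio = parity_by_cohort(sample)
print(rates, round(ratio, 2))  # {'North': 0.5, 'South': 1.0} 0.5
```

Publishing the per-cohort rates alongside the single ratio makes it clear which groups a system serves worst, not just that a gap exists.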
Skills at scale
Workforce adoption will decide whether investment turns into real productivity. Pair national targets with departmental training that maps to actual tasks.
- AI literacy for everyone: Safety, privacy, prompt quality, verification, and accountability.
- Role-specific paths: Caseworkers (document handling), analysts (data/LLM tooling), managers (risk acceptance and audit).
- Communities of practice: Identify AI champions in each team and give them time to support peers.
- Union engagement: Co-design guidelines on job impact, reskilling, and redeployment.
Explore AI courses by job role (Complete AI Training)
Communications that land with the public
- Lead with outcomes: "Appointment wait times cut by X days," "Case processing improved by Y%."
- Show the guardrails: Explain testing, human oversight, and how to appeal.
- Open the doors: Host public demos and lab visits so people can see how systems are tested before use.
- Publish savings and reinvestment: Show where freed-up time and money will improve front-line services.
Metrics that decide funding
- Trust index: Public sentiment by region and demographic, tracked quarterly.
- Adoption and proficiency: Percentage of staff using AI weekly and passing role-based assessments.
- Service impact: Time to decision, backlog reduction, and error/complaint rates.
- Fairness and safety: Disparity metrics, incidents per 10k decisions, and time-to-mitigation.
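For teams wiring these indicators into dashboards, a minimal sketch of the last two calculations, with made-up figures, might look like this:

```python
# Illustrative metric roll-ups for the fairness-and-safety indicators above;
# field names and figures are assumptions, not a reporting standard.
def incidents_per_10k(incident_count: int, decision_count: int) -> float:
    """Incidents normalised per 10,000 decisions, comparable across services."""
    return 10_000 * incident_count / decision_count

def mean_hours_to_mitigation(durations_hours: list[float]) -> float:
    """Average time from incident detection to mitigation, in hours."""
    return sum(durations_hours) / len(durations_hours)

print(incidents_per_10k(3, 48_000))                # -> 0.625 per 10k decisions
print(mean_hours_to_mitigation([4.0, 12.5, 7.5]))  # -> 8.0 hours
```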
Bottom line
Investment and deals create momentum, but public trust converts plans into outcomes. Involve people, prove safety, train the workforce, and report results in plain English. Do that, and AI can improve services and support growth with legitimacy.