The real cost of the UK's "Free AI Training for All" is democratic voice
On January 28, the UK government launched "Free AI training for all," fronted by the Secretary of State for Science, Innovation and Technology. It follows last year's pledge of £187 million for AI skills and partnerships with NVIDIA, Google and Microsoft under the AI Opportunities Action Plan. The goal sounds inclusive. The delivery points somewhere else.
Here's the issue: the state is outsourcing AI literacy to the very firms the public mistrusts, while overlooking the independent UK institutions that have spent years building digital and media literacy. The result is worker-focused product onboarding, not citizen-focused education.
What the AI Skills Hub actually offers
- A "bookmark" site indexing hundreds of courses with weak quality control: reports of fake listings, dead links, high-cost content, and material misaligned with UK law (e.g., IP).
- Estimated multi-million-pound build with little transparency on spend or procurement standards.
- "Foundation" courses promoted as the default pathway - all from large US providers, geared to their platforms.
- Positioned as "for all," but targeted at workers in selected sectors, including the creative industries - even as UK creators push back against unfair use of their work.
Government materials on the Hub acknowledge that people avoid AI because they don't trust AI companies. Yet the solution offered is more of the same companies, with no clear oversight, guardrails, or route for public challenge.
Public trust and sovereignty are at stake
The UK risks trading strategic independence for vendor dependency. Lock-in happens quietly: "free" skills, institutional deals, and default tools that become hard to switch off. Meanwhile, public attitudes data show deep caution about Big Tech's role in the public interest. See the independent work from the Ada Lovelace Institute.
This isn't just about tools. It's about who sets the agenda for education, work, and civic life. Government agreements with large AI companies across justice, defence, security, and edtech raise serious questions: conflicts of interest, legal exposure, and a narrowing of democratic oversight.
Skills are not literacy
Tool training creates better users. Literacy creates informed citizens. The Hub focuses on the former and sidelines the latter.
- What's missing: rights, consent, data protection, IP, collective bargaining, environmental costs, and when AI should be refused.
- What's needed: independent, accessible materials; community-delivered learning; and space for people to choose if, when, and how AI is used.
Practical guardrails for departments, councils, colleges
- Procurement
  - Mandate vendor neutrality. No "foundation" pathways tied to one company.
  - Publish costs, course lists, QA processes, and conflict-of-interest statements.
  - Require UK-law alignment reviews (IP, equality, employment, data protection).
  - Build exit plans: interoperability, data portability, escrow, and time-boxed pilots.
- Data protection and safety
  - Run data protection impact assessments (DPIAs), put data processing agreements in place, and document data flows for any tool touching personal data.
  - Set policies on training data provenance, content reuse, and "AI poisoning" risks.
  - Follow the UK ICO's guidance on generative AI and data protection.
- Worker voice and ethics
  - Co-design with unions, educators, and frontline staff before deployment.
  - Offer opt-outs and non-AI alternatives for sensitive tasks.
  - Create a clear path to report harms, bias, and workload inflation.
- Curriculum scope
  - Teach rights, risks, and remedies - not just prompts and product features.
  - Localize to UK law and sector norms; include creative-sector IP and consent.
  - Add "Should we use AI here?" decision checklists.
- Measurement
  - Shift KPIs from enrollments to outcomes: user confidence, reduced complaints, accessibility, fairness, and cost transparency.
Invest where trust already lives
Fund UK-based colleges, libraries, unions, adult education providers, community groups, and civil society organizations to deliver independent AI literacy. They know local needs, can teach in context, and can bring in critical perspectives that vendors won't.
Pair this with an open, maintained catalogue that discloses costs, provenance, and legal fit - and removes low-quality or misaligned content fast. Keep Big Tech in the classroom as one voice, not the only voice.
Tech town experiments need civic guardrails
Turning places into testbeds without clear consent, evaluation, and exit criteria is a fast route to dependency. If you pilot "smart" or AI-enabled services, publish success measures upfront, cap contract terms, and allow communities to say no. Growth that erodes trust is not growth that lasts.
What leaders can do this quarter
- Freeze "foundation" endorsements tied to a single vendor; issue interim vendor-neutral guidance.
- Commission independent AI literacy modules focused on rights, risk, and professional judgment.
- Stand up a cross-sector QA panel (education, labour, law, accessibility, civil society) to review all Hub listings.
- Require union and workforce consultation before AI adoption in any public service workflow.
- Publish spend, suppliers, and legal reviews for the AI Skills Hub and related deals.
If you still need a quick view of course options
Use open catalogues that compare providers by role and skills rather than funnelling learners into a single vendor's stack. A neutral overview by job role can help with planning and budget checks.
The bottom line
AI capability matters. But capability without independence and consent trades public value for vendor value. Build skills, yes - and build literacy, safeguards, and a real voice for the people expected to live with the outcomes.