Low pay stalls Whitehall AI rollout as talent shuns Civil Service
Government AI rollout stalls as pay caps block hires; i.AI spent just £5m of £12m amid unfilled roles. Progress needs flexible pay, faster hiring, and vendor partnerships.

Government AI rollout stalls: pay and hiring are the blockers
The government's push to scale AI across the Civil Service is slowing for a simple reason: the talent won't sign on at current pay. New research shows the i.AI unit spent just £5 million of its £12 million first-year budget, with only £3.7 million on staffing, because it couldn't recruit enough specialists.
The aim of i.AI is straightforward: automate routine work, speed up decisions, and lift productivity across Whitehall. The initiative was launched in November 2023 under then-Deputy Prime Minister Oliver Dowden and later merged into a broader "blueprint for digital government" in January 2025. Before the merger, the unit had more than 40 staff but launched fewer projects than planned.
What the data signals
- Budget: £12m allocated; £5m spent; £3.7m on staff. Unfilled roles drove the underspend.
- Policy intent: long-term target to reduce Civil Service costs by 15% using AI-enabled efficiencies.
- Hiring reality: many senior digital posts are filled by non-technical staff, and outdated processes slow recruitment.
The pay gap problem
Reform, the Westminster think tank, concludes that adoption will continue to stall without a fundamental shift in how government hires, pays, and partners for AI work. Market rates are far ahead of Civil Service bands: top machine learning engineers in industry can earn ten times government pay, and some AI researchers are reportedly paid around $800,000 a year. UK interviewees estimated that government would need packages of roughly £650,000 to compete for certain roles.
That level is politically hard to justify. But ignoring the gap doesn't remove it; it just delays delivery. Leaders often find it easier to ask for bigger budgets than to execute a high-skill rollout that carries delivery risk.
Why adoption lags in practice
- Risk aversion limits even basic AI testing and pilots.
- Hiring workflows are slow and biased toward generalists for specialist roles.
- Pay frameworks don't flex for niche, high-demand skills like applied ML and AI safety.
- IT security and other specialist roles are paid close to non-technical roles, weakening recruitment and retention.
What this means for senior leaders
If you want AI benefits such as faster casework, fewer backlogs, and lower unit costs, you need the capability to build, buy, and govern AI safely. That means pay flexibility, targeted partnerships, and honest prioritisation. Without these, timelines slip and business cases don't materialise.
Practical moves you can make this quarter
- Define 3-5 high-value use cases with measurable outcomes (e.g., hours saved, queue time reduced, error rate cut). Fund those first.
- Stand up a small senior technical panel to gate major AI spend. Require a named technical owner for every project.
- Use market pay supplements, scarce-skills allowances, and fixed-term specialist contracts where policy allows.
- Borrow expertise: short-term secondments from industry and academia; co-deliver with vendors while building in-house skills.
- Run time-boxed pilots (6-12 weeks), measure impact, then scale what works. Stop what doesn't.
- Streamline hiring for AI-critical roles: pre-cleared job descriptions, fast-track sifts, technical assessments led by practitioners.
- Create a shared talent pool across departments for roles like ML engineers, data product managers, and AI security leads.
Pay options that fit within constraints
- Apply existing flexibilities: market supplements, recruitment and retention allowances, and targeted bonuses tied to delivery milestones.
- Use specialist day-rates for short-term needs while you recruit permanent leads who set standards and reduce vendor lock-in.
- Offer hybrid packages: lower base plus clear mission, public impact, modern tools, and defined career progression for technologists.
Partnerships that accelerate delivery
- Co-develop with reputable vendors under strong data, privacy, and security terms; insist on knowledge transfer.
- Partner with universities and research labs for evaluation, safety, and benchmarking.
- Join cross-government working groups to reuse patterns, models, and guardrails rather than rebuilding from scratch.
Governance that keeps projects on track
- Standardise problem statements, data access, model evaluation, and red-teaming for safety and bias.
- Appoint an accountable SRO and a named senior technical owner for each project.
- Publish simple scorecards: impact, cost-to-serve, accuracy, failure modes, and user satisfaction.
Skills: build, don't just buy
You will still need internal capability to specify, evaluate, and operate AI safely, even with vendors. Set up role-based training for policy, operations, and technical teams, and require hands-on labs before production approvals.
Context and sources
Reform's analysis highlights the need for greater pay flexibility and stronger partnerships to make AI delivery viable across the public sector (see the Reform think tank's report). Parliamentary scrutiny of digital capability and procurement continues via the Science, Innovation and Technology Committee.
Bottom line
You won't close a double-digit productivity gap on generalist staffing and static pay bands. Flex pay where you can, borrow expertise where you must, and build core skills so delivery sticks. Move a few high-value use cases to production, prove the savings, and use that proof to unlock the next round.