Government AI 'Consult' cuts consultation analysis from weeks to hours
Policy teams are getting time back. Consult, an AI tool built inside the UK Government's Humphrey programme, analysed more than 50,000 consultation responses in around two hours at a cost of £240. Its output then took human experts 22 hours to review and validate - work that previously stretched over weeks.
The model agreed with human analysts 83% of the time. For context, two separate human teams doing the same task manually aligned just 55% of the time. That combination of speed and above-human consistency is where the value shows up for overloaded departments.
Key numbers that matter for government teams
- ~50,000 responses grouped into themes in ~2 hours
- £240 processing cost for the initial AI pass
- 22 hours of expert validation vs weeks of manual sorting
- 83% model-human agreement vs 55% human-human agreement
- Projected savings: 75,000 staff days per year (~£20 million)
Consult informed the Independent Water Commission's review of the water sector, which fed into the decision to abolish Ofwat. It has also supported major exercises such as the Scottish Government's consultation on non-surgical cosmetic procedures and the Digital Inclusion Action Plan, helping teams surface themes and focus human effort where it counts.
What this means for your directorate
This is a practical path to clear backlogs and reallocate analysts to higher-value work. As Digital Government Minister Ian Murray put it: "By taking on the basic admin, Consult is giving staff time to focus on what matters - taking action to fix public services. In the process, it could save the taxpayer hundreds of thousands of pounds."
If your unit runs consultations, you can treat AI as a first-pass triage. The human role shifts from sorting to verifying, interpreting, and acting.
Inside the Humphrey programme
Consult sits within Humphrey - a suite of secure, in-house AI tools for civil servants. Another tool, Redbox, previously helped more than 5,000 officials summarise documents and draft briefings. While Redbox has been open-sourced and development has ended, its technology has fed into newer work, including GOV.UK Chat, which will be trialled in the GOV.UK App.
Engineers are now building 'AI Exemplar' projects to speed up planning decisions, support probation officers, and improve frontline delivery across departments. The aim is simple: apply AI to routine, high-volume administrative work so policy professionals can focus on insight and action.
How to adopt this approach in your team
- Define the taxonomy early: set clear themes and sub-themes before ingestion to improve clustering quality.
- Run an AI first pass: batch the responses, generate themes, and produce exemplar quotes for each theme.
- Validate with a sampling plan: spot-check a representative sample; measure precision/recall and agreement rates.
- Establish thresholds: decide in advance what agreement level triggers re-runs or manual intervention.
- Keep an audit trail: log prompts, model versions, datasets, and decisions for assurance and scrutiny.
- Address privacy and equalities: strip personal data where possible; audit for bias and document mitigations.
- Publish a transparent methodology note: explain your process to maintain trust with respondents and ministers.
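The validation and threshold steps above can be sketched in a few lines. This is a minimal illustration, not anything from the Consult tool itself: the theme labels, sample, and 80% threshold are all hypothetical, chosen only to show how a sampling plan turns "spot-check a representative sample" into a concrete accept/re-run decision.

```python
def percent_agreement(ai_labels, human_labels):
    """Share of sampled responses where the AI pass and the human
    reviewer assigned the same theme."""
    matches = sum(a == h for a, h in zip(ai_labels, human_labels))
    return matches / len(ai_labels)

# Hypothetical spot-check sample: theme assigned by the AI first pass
# vs. the theme an expert reviewer assigned to the same response.
ai    = ["billing", "leaks", "billing", "service", "leaks", "billing"]
human = ["billing", "leaks", "service", "service", "leaks", "billing"]

agreement = percent_agreement(ai, human)
print(f"agreement: {agreement:.0%}")

# Pre-agreed threshold from the sampling plan: below it, re-run the
# pass or escalate to manual review instead of accepting the grouping.
THRESHOLD = 0.80
print("accept AI pass" if agreement >= THRESHOLD else "re-run or review manually")
```

In practice the sample should be drawn to cover every theme, and the threshold, sample size, and escalation rule decided before the AI pass runs, so the decision to accept or re-run is mechanical rather than negotiated after the fact.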
Where to go next
For policy and digital leads shaping AI use, start with practical guidance on responsible deployment in government. The UK guidance on public sector AI is a solid baseline for risk, assurance, and transparency.
If your organisation needs to upskill staff on AI literacy and prompt practices, structured learning can speed adoption and cut errors during validation.
The direction is clear: use AI for the heavy lifting on volume, keep humans on judgement, and document everything. That mix is how you move faster without losing rigour.