Governance, growth and the AI question for government
AI is the most disruptive technology the public sector has faced in decades. But the hard part isn't pilots or procurement; it's governance. At the GDSA Summit, leaders from HMRC, DWP Digital, the Royal Society, Kainos and AWS were clear: if government wants the benefits without the blowback, it needs to plan earlier, set guardrails and embed sustainability from day one.
The shift: AI's disruption is a governance test
Past waves, from VR to blockchain and crypto, came and went without serious talk about environmental impact. This time is different. "Nowhere along any of those previous disruptive technologies did I hear about sustainability or the environment being mentioned," said Ishmael Burdeau, lead sustainability business architect at DWP Digital. AI is forcing a broader conversation: not just what we can build, but what it costs society and the planet.
From quick wins to service redesign
Early government AI has chased operational gains. At HMRC, 37-38 million calls a year mean advisors spend significant time writing summaries. "Having AI listen to that call, summarise it… that's the advisor in about a fifth of the time," said Jeremy Davis, deputy director, head of user-centred design at HMRC. At that scale, the savings add up fast.
But the bigger prize isn't shaving minutes; it's reshaping services. "Rather than getting 37 million calls from people who almost certainly do not want to call… how do we proactively reach out to people by understanding the data that we hold?" Davis asked. The challenge is rising above tactical targets and designing for outcomes citizens actually want.
Growth vs sustainability: set guardrails
Incentives matter. "The nature of Silicon Valley and the AI systems that we currently are developing is in pursuit of hundreds of billions of dollars of growth for the sake of growth itself," said Alison Griswold, senior policy adviser at the Royal Society. That growth-at-all-costs story clashes with environmental limits.
"It's not that all growth is at odds with sustainability," she said. "But endless, unfettered growth is inherently at odds with sustainability and living within constrained planetary boundaries." Also, beware the sales pitch that every new technology is inevitable and essential. Separate the narrative from what's genuinely useful for public value.
Build sustainability in from day one
Governance isn't a brake; it's the rails innovation runs on. "Governance is not a popular topic," said Seto Adenuga, AI governance & ethics manager at Kainos. "But it's actually there to promote and support innovation." The fix: embed sustainability alongside security, privacy and safety from the first design workshop, not as a late-stage compliance check.
Start with a basic question: do you even need AI? As Adenuga put it, asking "do we really need this tech?" early heads off unnecessary costs, risks and environmental load later in the lifecycle.
What providers owe the public sector
Vendors have a role too. "We have to build this into our development from the start," said Faye Holt, director, pan-UK public sector at AWS, referring to sustainability. Providers can use AI to improve the efficiency of their own infrastructure: datacentre cooling, energy scheduling and water use among them. The point: performance and sustainability aren't trade-offs if they're engineered together.
Prepare now: a practical playbook
If you lead policy, delivery or procurement, use this as your starting point:
- Start earlier than you think. Don't wait for a mandate; run horizon scans, tabletop exercises and policy sprints now.
- Define the problem clearly. Write the user outcome, failure modes and "do nothing" baseline before you mention models.
- Choose the smallest effective tool. Rule-based logic, search or analytics might solve 60% of use cases with far less risk.
- Embed governance in design. Document data sources, legal basis, DPIA, model choice, energy footprint and human oversight up front.
- Set sustainability targets. Track emissions per 1,000 inferences, energy per training run and datacentre water intensity. Make the target a go/no-go gate.
- Design for service change, not just efficiency. Use data to reduce avoidable demand (fewer inbound calls, fewer failed transactions), not just speed up today's tasks.
- Prioritise high-leverage interventions. Proactive outreach, eligibility pre-checks and personalised guidance can cut friction at scale.
- Build human-in-the-loop by default. Define what staff see, when they can override and how decisions are explained to citizens.
- Plan exit ramps. Specify model swap criteria, vendor lock-in limits and data portability before deployment.
- Publish what you can. Model cards, risk summaries and evaluation methods build trust and reduce surprises.
- Pilot with policy alignment. Test with real users, real constraints and real red lines, then scale gradually.
- Audit continuously. Monitor bias, drift, error rates, system emissions and incident reports; tie them to escalation paths.
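The go/no-go gates in the playbook above can be made concrete in code. Below is a minimal sketch of a design-stage review gate in Python; the check names and the emissions threshold are illustrative assumptions for this article, not figures from the summit or any department's actual policy:

```python
from dataclasses import dataclass

# Hypothetical threshold; a real limit would come from your own
# sustainability targets, not this illustrative figure.
MAX_G_CO2E_PER_1K_INFERENCES = 50.0

@dataclass
class ProjectReview:
    """Answers gathered at a design-stage governance review."""
    dpia_completed: bool
    human_oversight_defined: bool
    exit_criteria_documented: bool
    g_co2e_per_1k_inferences: float

def go_no_go(review: ProjectReview) -> tuple[bool, list[str]]:
    """Return (approved, blockers): approval requires zero blockers."""
    blockers = []
    if not review.dpia_completed:
        blockers.append("DPIA not completed")
    if not review.human_oversight_defined:
        blockers.append("human-in-the-loop controls undefined")
    if not review.exit_criteria_documented:
        blockers.append("no exit ramp / model swap criteria")
    if review.g_co2e_per_1k_inferences > MAX_G_CO2E_PER_1K_INFERENCES:
        blockers.append("emissions above sustainability target")
    return (not blockers, blockers)

# A project with no exit ramp and emissions over target is blocked
# until both gaps are closed, regardless of its business case.
approved, blockers = go_no_go(ProjectReview(True, True, False, 72.0))
```

The design choice worth copying is that the gate returns named blockers rather than a bare yes/no, so a failed review tells the team exactly what to fix before resubmitting.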
Metrics that matter
- Citizen outcomes: reduced avoidable calls, faster case resolution, fewer repeat contacts, higher benefit take-up.
- Service quality: accuracy, false positive/negative rates, appeal rates, advisor time saved that's reinvested in complex cases.
- Fairness and safety: subgroup performance parity, override frequency, incident count and severity.
- Sustainability: carbon per inference, total kWh, datacentre water use, embodied emissions for major training runs.
- Governance health: number of systems with model cards, DPIAs completed, red-team exercises conducted, time-to-mitigation.
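Several of the sustainability metrics above are simple arithmetic on numbers teams already hold. A minimal sketch, assuming metered energy use and a published grid carbon-intensity figure; both input values in the example are invented for illustration:

```python
def carbon_per_1k_inferences(total_kwh: float,
                             grid_g_co2e_per_kwh: float,
                             inferences: int) -> float:
    """Grams of CO2e per 1,000 inferences, from metered energy
    consumption and the grid's carbon intensity over the same period."""
    return total_kwh * grid_g_co2e_per_kwh / inferences * 1000

# Illustrative figures only: 120 kWh metered across 1M inferences,
# on a grid emitting 200 gCO2e per kWh.
print(carbon_per_1k_inferences(120.0, 200.0, 1_000_000))  # → 24.0
```

Tracked per reporting period, the same number becomes the input to the go/no-go gate described in the playbook: if it rises above target, that's an escalation, not a footnote.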
Questions to ask before funding an AI project
- What citizen outcome improves, by how much and by when? How will we prove it?
- Is AI the minimal tool that solves the problem? What's the non-AI alternative and its cost/benefit?
- What data do we truly need? Can we minimise, anonymise or process locally to reduce risk and emissions?
- What are the environmental limits for this project (energy, water, carbon), and who owns them?
- How will we govern model changes, vendor updates and emergent risks over time?
Useful frameworks
Don't start from scratch. Align your controls to established guidance:
- NIST AI Risk Management Framework for risk functions, controls and measurement.
- UK Data Ethics Framework for principles on fairness, accountability and transparency.
Skills and capacity
Policy, design and delivery teams need a common playbook. If you're building capability across departments, see the AI Learning Path for Policy Makers for governance, risk and practical deployment foundations.
Looking ahead
"Start earlier and get ahead of it," Davis said. Burdeau expects infrastructure to shift too: "I'm hoping at some point the whole datacentre conversation becomes less contentious. It should move more to things on the edge, more stuff in people's devices and homes."
Whatever arrives next, the pattern holds. Define outcomes, embed governance and sustainability from the first sketch, and build services citizens don't have to call about. Do that, and AI stops being a headline and starts being useful public infrastructure.