OECD urges high-benefit, low-risk AI in government to build trust
OECD urges governments to pursue high-benefit, low-risk AI with measurement, data, and guardrails. Start with HR and service improvements, engage staff, and scale what works.

OECD to governments: Focus AI on high benefit, low risk
Governments should prioritise AI projects that deliver clear public value with minimal downside to service outcomes and trust. That's the core message of a new OECD report, Governing with Artificial Intelligence: The State of Play and Way Forward in Core Government Functions, published on 18 September 2025.
The OECD notes that many public bodies still lack the processes to measure results end-to-end - including efficiency of spend, service quality, and potential harms. Building this measurement capability is a first-order task: it is how organisations reach an initial level of maturity and gather the evidence to scale what works.
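To make the measurement point concrete, here is a minimal sketch of what end-to-end results tracking could look like, rolling case records into the three lenses the OECD names: efficiency of spend, service quality, and potential harms. The record fields, the harm proxy, and the metric names are illustrative assumptions, not definitions from the report.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical case record - field names are illustrative, not from the OECD report.
@dataclass
class CaseRecord:
    cost: float              # spend attributed to handling the case
    days_to_resolve: float   # service-quality proxy
    escalated: bool          # flagged for human review
    overturned: bool         # reversed on appeal (a potential-harm proxy)

def end_to_end_metrics(cases: list[CaseRecord]) -> dict[str, float]:
    """Summarise a batch of cases across the three measurement lenses:
    efficiency of spend, service quality, and potential harms."""
    if not cases:
        return {}
    n = len(cases)
    return {
        "cost_per_case": sum(c.cost for c in cases) / n,                # spend
        "avg_days_to_resolve": mean(c.days_to_resolve for c in cases),  # quality
        "escalation_rate": sum(c.escalated for c in cases) / n,
        "overturn_rate": sum(c.overturned for c in cases) / n,          # harms signal
    }
```

Even a simple roll-up like this gives pilots a baseline to be judged against before anyone decides to scale them.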
What "high-benefit, low-risk" looks like
Start where the value is obvious and the risk is manageable. Think: reducing backlogs, improving response times, triaging requests, and supporting staff with better information at the point of need. Avoid systems that make high-stakes decisions without strong oversight, quality data, and clear accountability.
The OECD frames this as a practical path to responsible adoption now, while remaining ready for future shifts in technology and policy.
The three pillars for trustworthy AI in government
The report's Framework for Trustworthy AI in Government rests on three pillars that work together as a system:
- Enablers: Quality data, digital and AI skills, sustainable funding, and modern infrastructure.
- Guardrails: Transparency, accountability, and risk tools that set clear limits and build auditability.
- Engagement: Ongoing consultation with citizens, civil servants, and cross-border partners.
Used together, these pillars help public organisations make responsible choices, reduce implementation risks, and scale success beyond pilots.
Trust risk: Over-reliance and faulty outputs
OECD public governance director Elsa Pilichowski warned that over-reliance on AI can break trust, especially where flawed data drives faulty conclusions. Past failures - such as wrongful accusations of fraud or debt - have triggered lasting public backlash.
She also cautioned that moving too slowly can damage trust by locking in services that no longer meet public needs. The path forward: adopt responsibly and fast enough to improve outcomes citizens can feel.
Where AI can help now: civil service reform
Across 200 cases in 11 core government functions, the report highlights strong near-term potential in HR and workforce productivity. Examples include personalised learning, recruitment support, and automating routine admin to free time for higher-value work.
However, too many initiatives remain isolated pilots. Scaling requires a clear strategy, shared standards, and common building blocks across departments.
Case in point: France's HR approach
France offers a more coherent model: a strategy that links AI integration, workforce planning, and training for civil servants, combining upskilling with guidelines that connect HR practice to digital ethics.
Key features include clear objectives, careful tool selection, transparent methodologies, risk mapping, and internal audits. The result: better oversight and a workforce prepared to work with AI, not be replaced by it.
Close the data and skills gap
To unlock better matching of people to roles and to predict performance fairly, governments need stronger HR data: job demands, workforce characteristics, and practical indicators of performance. Many HR systems don't capture this data with the depth or quality required.
Investment is needed in data foundations and HRM skills, alongside privacy, ethics, and audit capabilities. Without this, AI will struggle to add real value at scale.
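As a hypothetical illustration of what that data foundation could look like, the sketch below pairs a simple schema - job demands, workforce characteristics, and performance indicators - with a completeness check that flags gaps before any AI matching is attempted. All names and fields are assumptions, not OECD definitions.

```python
from dataclasses import dataclass

# Illustrative HR schema - field names are assumptions, not from the report.
@dataclass
class RoleProfile:
    role_id: str
    required_skills: set[str]        # job demands

@dataclass
class StaffProfile:
    staff_id: str
    skills: set[str]                 # workforce characteristics
    recent_outcomes: list[float]     # practical performance indicators

def skill_match(role: RoleProfile, person: StaffProfile) -> float:
    """Fraction of a role's required skills the person holds -
    only meaningful when the underlying records are complete."""
    if not role.required_skills:
        return 0.0
    return len(role.required_skills & person.skills) / len(role.required_skills)

def completeness(staff: list[StaffProfile]) -> float:
    """Share of staff records with both skills and outcome data captured:
    a quick readiness check before investing in matching models."""
    if not staff:
        return 0.0
    return sum(bool(s.skills and s.recent_outcomes) for s in staff) / len(staff)
```

The readiness check is the point: if most records fail it, the priority is data capture, not model procurement.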
Engage your workforce early and often
Roles and responsibilities will shift as AI rolls out. The OECD calls for transparent dialogue with public servants and unions, with clear communication on goals, impacts, and safeguards.
Social dialogue and collective bargaining help build trust, protect labour rights, and ensure access to training so staff can work effectively with AI.
Practical next steps for public leaders
- Define 3-5 high-benefit, low-risk AI use cases tied to mission outcomes and service metrics.
- Stand up guardrails: risk assessments, human-in-the-loop review, audit trails, and public transparency notes (a minimal sketch follows this list).
- Invest in data quality and access (start with HR, case management, and service intake data).
- Create a cross-department AI steering group to standardise procurement, assurance, and reporting.
- Engage unions and staff councils early; publish clear role impacts and escalation paths.
- Run small pilots with measurable targets; scale what works and sunset what doesn't.
- Upskill teams with role-relevant training and practical exercises.
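As a concrete sketch of the guardrails bullet above - human-in-the-loop review plus an audit trail - the snippet below auto-approves only high-confidence cases, routes the rest to a named reviewer, and appends every decision to a timestamped log. The threshold, field names, and log format are illustrative assumptions, not an OECD specification.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

# Guardrail sketch: all names and the threshold are illustrative assumptions.
AUTO_APPROVE_THRESHOLD = 0.95  # assumed policy value, not an OECD figure

@dataclass
class Decision:
    case_id: str
    model_score: float       # model confidence for the favourable outcome
    outcome: str             # "approved" or "referred"
    reviewed_by: str | None  # None only for auto-approved cases

def audit_log(decision: Decision, path: str = "audit.jsonl") -> None:
    """Append a timestamped record so every decision is traceable later."""
    entry = {"id": str(uuid.uuid4()), "ts": time.time(), **asdict(decision)}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def decide(case_id: str, model_score: float, reviewer: str) -> Decision:
    """Auto-approve only high-confidence cases; refer everything else
    to a human reviewer (human-in-the-loop)."""
    if model_score >= AUTO_APPROVE_THRESHOLD:
        decision = Decision(case_id, model_score, "approved", None)
    else:
        decision = Decision(case_id, model_score, "referred", reviewer)
    audit_log(decision)  # every path leaves an audit trail
    return decision
```

The design choice that matters is the default: anything the model is unsure about falls through to a person, and no path skips the log.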