Most government agencies deploy AI without adequate trust safeguards, global study finds

Only 6% of government agencies have both high confidence in their AI and evidence it actually works safely - the lowest rate of any industry. A new SAS/IDC report found 38% of agencies trust AI systems that haven't been fully validated.

Published on: Apr 28, 2026

Government organisations are adopting artificial intelligence at rates that outpace their ability to validate whether those systems work safely and fairly. A new report from SAS and IDC found that only 6% of government agencies have both high confidence in their AI systems and evidence those systems are actually trustworthy - the lowest figure across all industries surveyed.

The misalignment creates what researchers call the "trust dilemma": agencies either underuse reliable AI because they doubt it, or over-rely on unproven systems. Thirty-eight percent of government organisations fall into the second category, placing strong confidence in AI that hasn't been fully validated.

Generative AI creates false confidence

Government leaders trust generative AI more than traditional machine learning, even though the evidence justifies the opposite. Machine learning has years of proven use in tax administration and fraud detection, while generative AI is less explainable and more error-prone. Yet generative AI scores higher on trust measures among public sector respondents.

Only 15.3% of government organisations operate at the highest level of trustworthy AI practices, compared to the global average of 19.8%. Banking and insurance organisations significantly outpace the public sector in both current trustworthy AI implementation and planned investment.

Three barriers block progress

Every region surveyed - North America, Europe, Latin America, the Middle East, Africa, Turkey, and Asia-Pacific - cited the same top obstacles:

  • Lack of centralised or optimised data foundations
  • Absence of clear data governance frameworks
  • Skills gaps, particularly among general employees rather than specialist technical staff

Government agencies plan to increase AI spending significantly. Nearly half expect investment growth between 4% and 20% in the coming year, with 12.6% anticipating increases above 20%.

The stakes are different in government

Public sector AI decisions affect citizens directly. A biased loan denial or incorrectly flagged benefit fraud case carries consequences that differ from commercial applications. Yet government lags in the foundational work - data architecture and governance - that makes AI trustworthy.

Grant Brooks, senior vice president of public sector at SAS, said: "For the public sector to rely on AI, it must deliver clear value while protecting the wellbeing of citizens. The report findings suggest we have work to do to achieve that."

Government organisations recognise the problem. Most prioritise investments in technology architecture and workforce skill development. The question is whether they can close the gap between AI adoption speed and trustworthy implementation before operational failures erode public confidence.


