Federal AI use cases grow fivefold since 2023 but talent gaps and public distrust slow progress, Brookings finds

Federal agencies deployed 3,600+ AI use cases in 2025, a 69% jump from 2024, but over 85% of high-impact systems lack required risk documentation. Half of Americans now express more concern than excitement about AI, up from 37% four years ago.

Published on: Apr 17, 2026

Federal AI Deployments Grow Fivefold Since 2023, but Talent and Trust Gaps Threaten Progress

U.S. federal agencies deployed more than 3,600 AI use cases in 2025, a 69% jump from 2024, but structural obstacles are slowing the technology's integration into government services, according to a Brookings Institution report released Wednesday.

The analysis examined AI inventories from 2023 to 2025, federal hiring data, Office of Management and Budget memoranda, and interviews with current and former federal technologists across eight agencies.

Growth Concentrated in Large Agencies

The expansion is dramatic but uneven. Five large agencies account for over half of all reported use cases. Smaller agencies lag significantly: the 11 small agencies that reported in 2025 submitted just 60 use cases, or 2% of the total inventory.

The growth builds on a 2023 baseline of just 720 use cases. The Social Security Administration uses AI primarily for service delivery and benefits processing, while the Department of Justice focuses on law enforcement applications.

A Talent Pipeline Problem

Federal agencies have posted more than 56,000 technical job listings since 2016, but fewer than 3% of them (just over 1,600) explicitly required AI skills.

A Biden-era hiring surge aimed to fill this gap, but that effort may have stalled. At least 25% of AI-specific job postings have occurred since 2024, meaning newly hired AI workers could have been among those dismissed in the early-2025 workforce reductions.

Deployment Caution and Accountability Gaps

Nearly 60% of all AI use cases remain in pilot or pre-deployment stages. This suggests agencies are moving cautiously, a reasonable approach but one that slows progress.

Accountability problems compound the issue. More than 85% of high-impact deployed AI systems in 2025 lack required documentation about risk mitigation, despite explicit Office of Management and Budget requirements.

The Trump administration's stated intention to link AI deployment to workforce cuts through the Department of Government Efficiency may be reinforcing agency hesitancy to move forward.

Public Trust Eroding

About half of Americans now express more concern than excitement about AI, up from 37% four years ago. Just 17% believe AI will positively affect the country in the next two decades.

Federal government approval ratings remain near historic lows. Only 16% of Americans say they trust Washington to do what is right most or nearly all of the time. Poor AI deployments could damage that trust further. Well-designed applications that deliver tangible service improvements could rebuild it.

What Needs to Change

Brookings recommends four actions:

  • Expand AI literacy training across agencies
  • Reform procurement rules built for static software systems
  • Strengthen transparency practices around high-risk AI applications
  • Prioritize use cases that produce clear, measurable public benefits

The report frames the challenge as both a risk and an opportunity. Agencies must move faster on talent acquisition and accountability measures. At the same time, they need space for experimentation without the threat of workforce cuts tied to deployment decisions.

For government professionals, the takeaway is clear: AI adoption is accelerating, but success depends on building internal expertise and public confidence simultaneously. Both require sustained commitment and resources.
