Federal AI Use Surges 70% as Government Shifts Disclosure Rules
The U.S. government reported roughly 3,600 AI use cases across agencies in 2025, a nearly 70% jump from the previous year, according to the Office of Management and Budget's latest inventory published on GitHub. The increase reflects both expanded reporting and actual new deployments as the Trump administration pushes agencies to adopt the technology with fewer restrictions than under the Biden administration.
The accounting includes pre-deployment projects, pilots, active operations, and retired systems. It excludes Defense Department and intelligence community uses. About 9% of reported uses have been retired.
NASA's Reporting Shift Skews the Numbers
NASA's inventory jumped from 18 use cases in 2024 to 425 in 2025 - a 2,260% increase that accounts for much of the government-wide growth. NASA spokeswoman Jennifer Dooren said the spike reflects better reporting, not new strategy. The agency began including research and development use cases that hadn't previously been counted.
The Department of Energy similarly reported a 330% increase, also attributing it to clarified reporting on R&D work. Without NASA's expansion, the government would still have added roughly 1,000 new uses across other agencies.
Brookings Institution fellow Valerie Wirtschafter said the overall increase is "probably both" improved reporting and genuine new uses. "And also NASA," she added.
Science Becomes Top Use Category
NASA's surge made science-related applications the most common use case reported in 2025, followed by administrative functions, IT, law enforcement, and health and medical. The shift matters because it changes how the government's AI work is categorized year to year.
The Department of Health and Human Services maintained its position as the agency with the most AI applications, followed by Veterans Affairs, Energy, and Justice - all reporting more than 300 uses.
High-Impact Uses Remain Murky
The Trump administration collapsed the Biden-era "rights-impacting" and "safety-impacting" categories into a single "high-impact" classification. These are uses that trigger minimum risk management requirements.
High-impact uses now represent 12% of the inventory, down from roughly 16% the previous year. The VA, DOJ, Homeland Security, and Energy report the highest proportions, but variation between agencies is substantial.
A new classification for uses "presumed high impact but determined not to be" has raised questions among analysts. About 110 use cases fall into this category. When excluding systems from the high-impact designation, agencies often assert that AI isn't the "principal basis" for a decision.
Sorelle Friedler, a computer science professor at Haverford College who helped shape Biden-era AI policy, said that language is too narrow. AI systems typically inform human decision-makers rather than making decisions alone, making the "principal basis" test difficult to apply consistently.
Chatbots Lead Commercial Adoption
More than three-quarters of CFO Act agencies deployed at least one major AI chatbot - ChatGPT, Copilot, or Gemini - to at least 10,000 employees in 2025. This marks the first year agencies separately inventoried commercial AI products alongside custom systems.
Eric Ueland, OMB's deputy director for management, said the government is finally able to "take advantage of cutting-edge technology as it's rolled out into the marketplace" without lengthy acquisition delays.
At HHS, agencies including the Administration for Children and Families are using AI tools to identify positions and grants that conflict with executive orders on diversity, equity, and inclusion.
Adoption Remains Uneven Across Government
Large agencies are scaling AI faster than small and midsized ones, according to Wirtschafter's analysis. This gap likely reflects differences in resources and mission, not just willingness to experiment.
Transparency issues persist. Not all agencies maintain the required IDs that track use cases from year to year. Some agencies also provided less detail in 2025 than in 2024. The Department of Veterans Affairs, for example, previously outlined risk management protections but removed that detail from its latest submission.
Friedler said the reduced transparency gave her "a lot less confidence" that agencies were taking appropriate care with their systems.
Reporting Delays and Compliance Changes
The 2025 reporting deadline was pushed from December to January due to the federal government shutdown. In April, agencies had to bring high-impact uses into compliance with minimum risk management practices or discontinue them - a deadline that prompted some to alter their disclosures and add detail to existing entries.
The OMB requirement to publicly disclose AI use originated in an executive order during the final days of the first Trump administration. Congress later made it law, and the Biden administration expanded the process. The second Trump administration has modified the rules but maintained the core disclosure framework.
For government professionals implementing these systems, understanding both the formal requirements and the shifting definitions of "high-impact" use remains critical.