HHS's AI Strategy: A Live Test of Management-Based Regulation Inside Government
Regulators have been telling agencies and vendors how to manage AI risk for years. HHS just pointed that lens inward. Its new department-wide AI strategy is a public attempt to govern a fast-growing portfolio of tools inside one of the largest organizations in the federal government.
Released on December 4, 2025, the 20-page strategy and companion plan aim to make AI a "practical layer of value" across operations, research, and public health. HHS expects roughly a 70 percent increase in AI projects in FY 2025. Scale like that demands a system that can keep pace without creating blind spots.
The policy context
This move fits a broader federal push. The White House's America's AI Action Plan and the Office of Management and Budget's AI memo urge agencies to adopt AI while standing up governance structures and inventories for systems that affect safety or rights. A more recent executive order seeks to unify national AI policy.
HHS's strategy, paired with its AI compliance plan, is one of the clearest looks at how a large department is translating those directives into day-to-day internal governance.
What HHS plans to build
- Governance and risk management: An AI governance board, enterprise inventories, criteria for "high-impact" systems, documented assessments, independent reviews, pre-deployment testing, and lifecycle monitoring.
- Shared infrastructure and platforms: Common tooling and data services so programs don't reinvent the wheel, and so controls aren't fragmented.
- Workforce and burden reduction: Training, guidance, and process redesign to reduce administrative drag while maintaining guardrails.
- "Gold standard" research: Clear processes for using AI in scientific work, with reproducibility and transparency requirements.
- Modernized service delivery: Applying AI to public-facing services while tying oversight to risk tiers.
The plan maps its controls to the NIST AI Risk Management Framework and borrows the language of management systems: metrics, continuous improvement, and lifecycle oversight. It resembles an internal AI management system rather than a loose set of program-by-program rules.
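Tying oversight to risk tiers ultimately means something machine-checkable: every system at a given tier owes a defined set of controls. A minimal sketch of that idea follows; the tier names and control lists here are illustrative, not taken from the HHS plan itself.

```python
# Hypothetical mapping from risk tier to required lifecycle controls,
# loosely echoing the controls the HHS strategy describes. Names are
# invented for illustration.
REQUIRED_CONTROLS = {
    "high-impact": [
        "documented_assessment",
        "independent_review",
        "pre_deployment_testing",
        "lifecycle_monitoring",
        "public_plain_language_summary",
    ],
    "moderate": ["documented_assessment", "pre_deployment_testing"],
    "low": ["inventory_entry"],
}

def missing_controls(tier: str, completed: set) -> list:
    """Return the controls still outstanding for a system at a given tier."""
    return [c for c in REQUIRED_CONTROLS[tier] if c not in completed]

# A high-impact system that has only been assessed still owes four controls.
print(missing_controls("high-impact", {"documented_assessment"}))
```

The value of encoding tiers this way is that an auditor can diff "controls owed" against "controls completed" for the whole inventory, rather than reading each program's paperwork one by one.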
Why this is management-based regulation in practice
Management-based regulation asks organizations to build effective internal risk controls instead of following a long list of prescriptive rules. That approach fits AI, where models, data, and use cases change fast.
The upside: flexibility, faster learning, and clearer lines of accountability. The risk: box-ticking. Without strong oversight capacity and a culture that supports saying "no," paperwork can expand while real control thins out.
The make-or-break issue: surfacing policy choices hidden in "technical" settings
Seemingly small configuration decisions are policy: how to define a complaint, thresholds for risk scores, and tradeoffs between false positives and false negatives. If these choices stay buried in model cards and code, they bypass normal administrative scrutiny.
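The false-positive versus false-negative tradeoff is a concrete example of a policy choice hiding in a number. The sketch below uses made-up scores and labels to show how moving a single threshold shifts who gets flagged; the point is that the threshold is a policy decision, not a technical detail.

```python
# Illustrative only: invented scores and labels showing how one
# threshold setting trades false positives against false negatives.
def confusion_counts(scores, labels, threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

scores = [0.2, 0.4, 0.55, 0.7, 0.9]
labels = [0,   0,   1,    0,   1]   # 1 = genuine complaint

print(confusion_counts(scores, labels, 0.5))  # lenient threshold: (1, 0)
print(confusion_counts(scores, labels, 0.8))  # strict threshold: (0, 1)
```

A lenient threshold flags an innocuous case; a strict one misses a genuine complaint. Which error is worse is exactly the kind of judgment that belongs in a plain-language summary, not buried in a config file.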
HHS commits to plain-language public summaries for high-impact systems and significant waivers, along with metrics for transparency and reproducibility. That's a start. The open question is how deeply those summaries will reach into design decisions-especially when tools are acquired from external partners. Without methods that expose measurement choices and tradeoffs, crucial policy calls can remain hidden in technical documentation.
Metrics: helpful signal or speed run?
HHS proposes indicators like the percent of high-impact systems that complete independent review or the average time to respond to malfunctions. These are useful for auditors and inspectors general who want proof that governance is real.
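Indicators like these reduce to simple arithmetic over the system inventory and incident log. A hedged sketch, with invented records standing in for real inventory data:

```python
# Invented records illustrating the kind of indicators described:
# percent of high-impact systems independently reviewed, and average
# days to respond to malfunctions.
from statistics import mean

systems = [
    {"id": "S1", "high_impact": True,  "independent_review": True},
    {"id": "S2", "high_impact": True,  "independent_review": False},
    {"id": "S3", "high_impact": False, "independent_review": False},
]
malfunction_response_days = [2, 5, 11]

high_impact = [s for s in systems if s["high_impact"]]
pct_reviewed = 100 * sum(s["independent_review"] for s in high_impact) / len(high_impact)

print(f"{pct_reviewed:.0f}% of high-impact systems independently reviewed")
print(f"{mean(malfunction_response_days):.1f} days average malfunction response")
```

The computation is trivial; the hard part is keeping the underlying inventory honest, since a metric over a stale or incomplete inventory proves nothing.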
They can also create pressure to "get through the checklist," particularly with a projected 70 percent jump in use cases. In that environment, incentives matter. The system must reward stopping, redesigning, or sunsetting projects, not just shipping them.
The operational challenge most agencies underestimate
Drafting pillars is easy. Building inventories is manageable. The hard part is wiring the triggers so every new AI proposal automatically routes to the right reviewers, and ensuring those reviewers have both the authority and the confidence to halt or reshape work.
The more leadership encourages AI adoption, the more valuable that "stop" button becomes.
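The routing trigger described above can be sketched as a small intake function: every proposal must resolve to named reviewers, and intake fails loudly when no routing rule exists. Tier names and reviewer roles here are hypothetical.

```python
# Hypothetical intake routing: map a proposal's risk tier to the
# reviewers who must sign off (and who can halt the work). Roles and
# tier names are illustrative, not drawn from the HHS plan.
ROUTING = {
    "high-impact": ["governance_board", "independent_assessor"],
    "moderate": ["program_risk_officer"],
    "low": ["inventory_owner"],
}

def route_proposal(proposal: dict) -> list:
    """Return required reviewers; refuse intake if no routing rule exists."""
    tier = proposal["risk_tier"]
    reviewers = ROUTING.get(tier)
    if not reviewers:
        raise ValueError(f"No reviewers configured for tier {tier!r}; halting intake")
    return reviewers

print(route_proposal({"name": "eligibility triage model", "risk_tier": "high-impact"}))
```

Failing closed, where an unrecognized tier blocks intake rather than waving the proposal through, is the code-level version of the "stop" button the text describes.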
What other agencies should do now
- Make governance enterprise-wide: Individual program offices cannot enforce standards on their own. Stand up a central board with a clear mandate that links infrastructure, data governance, procurement, and risk management.
- Anchor to recognized frameworks: Map controls to the NIST AI RMF and, where useful, to AI management system standards such as ISO/IEC 42001. This gives auditors and oversight bodies something concrete to test.
- Treat transparency as a control: Publish the governance model, risk tiers, and compliance plan, not just high-level pillars. Make plain-language summaries routine for high-impact systems and waivers.
Key questions for leaders and oversight bodies
- How many high-impact systems were independently reviewed last year, and how many required redesign?
- How many plain-language public summaries were published?
- How often did assessors halt or suspend a project due to unresolved risks?
- Are tradeoffs (e.g., false positives vs. false negatives) documented in terms the public can understand?
- For vendor tools, where do HHS's responsibilities begin and end, and how are assurances verified?
Bottom line
HHS is running one of the first department-scale experiments in building an AI management system inside the federal government. Other departments will borrow its templates and language, and they should also stress-test them.
If management-based AI governance is going to work across government, it has to work inside the agencies that champion it. That means real authority, visible choices, and the willingness to slow down when risk outruns reward.