HHS Puts AI at the Heart of Health Innovation with a OneHHS Five-Pillar Strategy

HHS is moving AI from pilots to daily operations, with shared platforms, standards, and oversight. Early wins like FDA's agentic tools hint at faster, safer care and research.

Published on: Dec 11, 2025

HHS Puts AI at the Center of Health Innovation

On Dec. 4, 2025, the U.S. Department of Health and Human Services released a 21-page AI strategy that moves AI from pilots to core operations across the Department. It follows the AI Action Plan, Executive Order 14179 and OMB memoranda M-25-21 and M-25-22. The plan is led by Acting Chief AI Officer Clark Minor and is consistent with the Trump Administration's AI policy goals. The goal is simple: embed AI into internal operations, scientific research and public health programs to cut friction and improve outcomes.

Why this matters

This is a shift from scattered experiments to a coordinated, department-wide system. HHS is creating shared platforms, common rules and reusable models so every division can move faster with less duplication. The message to industry is clear: bring solutions that deliver measurable value and can operate within strong governance.

The five pillars of HHS' AI strategy

  • Governance and risk management: Clear oversight structures, ethics and civil rights safeguards, transparency on use cases and annual public reporting. AI must be trustworthy and accountable across programs.
  • Infrastructure and platform design: A unified "OneHHS" AI stack with cloud, data, scalable tools and security so teams can develop, share and deploy AI efficiently while protecting sensitive data.
  • Workforce development and burden reduction: Train and recruit AI talent, stand up new roles (data scientists, ML engineers, project managers) and introduce assistants that remove administrative work so staff can focus on high-value tasks.
  • Research and reproducibility: Use AI to accelerate biomedical research while holding a high bar for validation and repeatability so results can be trusted by the scientific community.
  • Modernization of care and public health delivery: Apply AI to clinical decision support, surveillance and program delivery to improve accuracy, access and program impact.

"OneHHS" in practice

Every division (FDA, CDC, CMS, NIH and others) is part of a shared ecosystem. HHS will catalog AI use cases across the enterprise and encourage reuse of code and models. Where lawful, solutions will be released as open source so successful tools can scale beyond a single program.

FDA's early move shows what's coming

On Dec. 1, 2025, FDA launched a secure, agency-wide agentic AI platform for employees. It supports complex, multistep tasks like scheduling regulatory meetings, assisting with pre-market reviews and helping automate parts of post-market surveillance and inspections. Human oversight is built in and participation is optional, signaling how AI will be adopted across HHS with safeguards.

For context on FDA's AI priorities, see FDA's Artificial Intelligence and Machine Learning page.

Key directives and actions for HHS divisions

  • Establish strong governance: A Department-wide AI Governance Board, led by Deputy Secretary Jim O'Neill, will oversee policy and major decisions. A cross-division AI Community of Practice drives execution from the ground level so strategy and implementation stay in sync.
  • Update policies for speed and safety: The CIO is streamlining IT policies and accelerating Authority to Operate (ATO) approvals for AI systems while keeping NIST-aligned security controls. The goal: remove outdated barriers without lowering standards.
  • Catalog and share use cases: HHS will maintain an enterprise inventory. FY 2024 included 271 active or planned use cases, with ~70% growth expected in FY 2025. Divisions must use a common taxonomy and share custom code/models internally and, where permissible, as open source.
  • Manage risk and comply with OMB timelines: Divisions must identify high-impact AI systems and meet minimum risk practices (bias mitigation, outcome monitoring, security and human oversight) by April 3, 2026. If a system can't meet safeguards, it pauses until compliant.
  • Adopt the NIST AI RMF: HHS is basing guidance on the NIST AI Risk Management Framework and will refine internal practices as standards evolve. Continuous monitoring is expected after deployment, not a one-time check.
  • Invest in people and skills: New roles, focused hiring and tiered training (from literacy to advanced development) will build an AI-ready workforce. Expect internal forums, webinars and talent exchanges to spread know-how.
  • Measure and report impact: Divisions will tie AI initiatives to annual performance plans. Metrics include process improvements, cost/time savings, staff training rates and health outcomes. HHS will publicly share the use case inventory and major risk assessments.

Reference for risk management standards: NIST AI Risk Management Framework.

What leaders should do now

  • Appoint an AI lead and a working group to coordinate with the HHS AI Community of Practice.
  • Map current and planned AI use cases to the enterprise inventory taxonomy; prioritize reuse over net-new builds.
  • Identify potential high-impact systems; stand up bias testing, outcome monitoring and human-in-the-loop controls ahead of the April 3, 2026 deadline.
  • Budget for shared infrastructure: cloud, data access patterns, secure model hosting and monitoring.
  • Stand up a training plan across roles; pair literacy for most staff with deeper tracks for technical teams.
  • Ask vendors for compliance evidence (NIST AI RMF alignment, security controls, validation studies, audit logs).
  • Tighten data governance: provenance, consent, retention and access controls for sensitive data.
  • Write an internal policy for code/model sharing and open source releases consistent with legal limits.
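To make the checklist above concrete, here is a minimal sketch of what a use-case inventory record and a pre-deadline readiness check could look like. The field names, taxonomy tags and the `UseCaseRecord` class are illustrative assumptions, not an official HHS schema; the safeguard list mirrors the minimum risk practices named in the strategy (bias mitigation, outcome monitoring, security, human oversight).

```python
from dataclasses import dataclass, field

# Minimum safeguards for high-impact systems, per the strategy's
# risk-practice language. Names are illustrative, not an official taxonomy.
REQUIRED_SAFEGUARDS = {
    "bias_mitigation",
    "outcome_monitoring",
    "security_controls",
    "human_oversight",
}

@dataclass
class UseCaseRecord:
    """Hypothetical entry in an enterprise AI use-case inventory."""
    name: str
    division: str                       # e.g. "FDA", "CDC"
    taxonomy_tag: str                   # common enterprise taxonomy category
    high_impact: bool                   # flagged under the risk criteria
    safeguards: set = field(default_factory=set)

    def ready_for_deadline(self) -> bool:
        """High-impact systems need every minimum safeguard in place;
        otherwise they pause until compliant. Non-high-impact systems pass."""
        if not self.high_impact:
            return True
        return REQUIRED_SAFEGUARDS <= self.safeguards

# Example: a high-impact system with only partial safeguards is not ready.
record = UseCaseRecord(
    name="Adverse event triage assistant",
    division="FDA",
    taxonomy_tag="post-market-surveillance",
    high_impact=True,
    safeguards={"bias_mitigation", "security_controls"},
)

missing = REQUIRED_SAFEGUARDS - record.safeguards
print(record.ready_for_deadline())  # False: two safeguards still missing
print(sorted(missing))              # ['human_oversight', 'outcome_monitoring']
```

A structured record like this also makes the public reporting requirement cheaper: the same fields that gate deployment can feed the annual use-case inventory.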

The bigger signal

HHS is moving fast on two tracks: maturing governance while deploying practical tools that remove bottlenecks. The first wave focuses on internal productivity and decision quality, but the groundwork opens the door for public-private collaboration and scaled use across programs. For vendors and research leaders, this is the moment to present proven solutions that are safe, auditable and easy to reuse across agencies.

Upskilling the workforce

If you're planning internal training or role-specific development paths, here's a curated starting point: AI courses by job role. Align learning plans with the new HHS roles (data science, ML engineering, project management) to speed adoption and reduce rework.

