HHS requests input to scale clinical AI nationwide: What healthcare leaders should know
The U.S. Department of Health and Human Services has issued a request for information to help shape safe, useful and cost-effective adoption of AI across clinical care. The goal: improve outcomes and provider experience while lowering costs for patients and government programs.
Responses are due January 19, 2026. If you build, buy, regulate or use AI in care delivery, your input is on the critical path.
Why it matters
HHS is gathering practical advice to guide policy, funding and technical standards that can support AI at clinical scale. The RFI focuses on what it will take to deploy AI responsibly across real workflows and patient populations. Key questions include:
- How to structure change-control rules for digital health and AI software so patients stay safe as models and underlying data shift over time.
- How reimbursement should support provider use of AI that reduces costs and improves quality.
- How to increase interoperability for HIPAA-protected data so AI tools can run reliably across systems and settings.
- Where to invest in R&D to strengthen implementation, machine learning best practices and use in complex, high-acuity scenarios.
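One concrete ingredient of the change-control question above is statistical drift detection: flagging when a deployed model's inputs or scores no longer look like what it was validated on. The sketch below is a hypothetical illustration, not anything drawn from HHS guidance; it computes a population stability index (PSI) between a model's score distribution at deployment and its current distribution, and the 0.2 review threshold in the comment is a common rule of thumb, not a regulatory standard.

```python
import numpy as np

def population_stability_index(expected, observed, bins=10):
    """Compare a baseline score distribution to a current one.
    PSI > 0.2 is a common rule-of-thumb trigger for model review."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Clip current scores into the baseline range so every value is counted.
    observed = np.clip(observed, edges[0], edges[-1])
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    o_pct = np.histogram(observed, bins=edges)[0] / len(observed)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) in empty bins
    o_pct = np.clip(o_pct, 1e-6, None)
    return float(np.sum((o_pct - e_pct) * np.log(o_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.4, 0.10, 5000)  # risk scores at deployment
current = rng.normal(0.5, 0.12, 5000)   # this month's scores (shifted)
psi = population_stability_index(baseline, current)
```

A monitoring job like this answers the "when must an update be re-approved" question with data rather than calendar time, which is the kind of operational detail an RFI response can usefully spell out.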
"Artificial intelligence will be a transformative force for good across America," said Jim O'Neill, HHS Deputy Secretary. "We want to hear from you. Our efforts to accelerate AI adoption must be guided by the real needs and experiences of those developing these tools and delivering care."
Data liquidity and trust
Success hinges on secure movement of patient data across EHRs, payers and point-of-care systems. That includes the right privacy guardrails, clear consent patterns and reliable audit trails.
"Data liquidity and the trust patients and providers have in how data moves are essential," said Dr. Thomas Keane, Assistant Secretary for Technology Policy and National Coordinator for Health IT. Interoperability rules and infrastructure are central to the effort to "Make America Healthy Again" and reduce total system costs.
For context, see HHS' AI hub and privacy resources: HHS Artificial Intelligence and HIPAA.
What to include in your response
- Safety and change control: Post-deployment monitoring, drift detection, update approvals, rollback plans and clinical escalation paths.
- Payment models: Clear CPT/HCPCS pathways, shared-savings approaches, and quality metrics that credit AI-enabled workflows (e.g., reduced readmissions, faster throughput).
- Bias, fairness, and equity: Testing across demographics, documentation of known limitations and requirements to show comparable performance for high-risk subgroups.
- Human-in-the-loop: Role clarity for clinicians, override authority, explainability expectations and liability boundaries.
- Data interoperability: Priority standards (FHIR, USCDI), consent management, de-identification practices and secure API patterns for cross-vendor use.
- Operational fit: EHR integration, ambient documentation, prior authorization support, sepsis/ICU monitoring, imaging triage and discharge optimization.
- Validation and reporting: Pre-market and post-market evidence, real-world performance benchmarks, and transparent model cards.
- R&D priorities: High-acuity decision support, multi-modal models, simulation testbeds, and datasets with governance that protect privacy while enabling research.
- Workforce development: Practical training for clinicians, data teams and compliance staff to use and oversee AI safely.
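Several of these items, notably transparent model cards and subgroup performance reporting, lend themselves to machine-readable artifacts that vendors could publish and health systems could audit. The sketch below shows one hypothetical shape for such a card; the field names, model name, and numbers are illustrative assumptions, not an HHS, FDA, or industry-standard schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelCard:
    # Field names are illustrative, not a mandated schema.
    name: str
    version: str
    intended_use: str
    training_data: str
    known_limitations: list
    subgroup_performance: dict  # e.g., AUROC per demographic group
    last_validated: str

card = ModelCard(
    name="sepsis-risk-v2",
    version="2.3.1",
    intended_use="Early warning for adult inpatients; not for pediatrics.",
    training_data="2019-2023 EHR data from three academic medical centers.",
    known_limitations=["Lower sensitivity for patients under 40"],
    subgroup_performance={"overall": 0.87, "age_65_plus": 0.84},
    last_validated="2025-11-01",
)

print(json.dumps(asdict(card), indent=2))
```

Because the card serializes to plain JSON, it can travel with the model across vendors and sites, which ties the validation-and-reporting bullet to the interoperability one.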
Timeline and next steps
Responses due: January 19, 2026. HHS says feedback will inform coordinated actions across all divisions, from policy updates to funding and technical guidance.
Review the agency's AI priorities and prepare evidence-backed comments, including case studies, measured outcomes and implementation lessons learned. Point to barriers you've faced and the exact rules or investments that would remove them.
The larger trend
Earlier this year, HHS released an Artificial Intelligence Strategic Plan built around four goals: ignite health AI innovation, promote trustworthy development, democratize access and build an AI-skilled workforce. Agencies are also integrating AI across internal operations, research and public health.
The department is investing in a new AI platform to improve data quality and governance by unifying large, complex IT systems and automating administrative workflows. "HHS is taking a major step toward a modern, AI-ready architecture for national health data," said Stephen Ehikian, C3 AI's CEO.
What this means for providers and health systems
- Pick 2-3 high-yield use cases: Ambient scribing, triage/acuity alerts, imaging prioritization, revenue cycle automation, prior authorization.
- Build lightweight governance: An AI review committee, risk tiers, approval checklists and monitoring dashboards.
- Measure outcomes: Safety events, false positives/negatives, clinician time saved, throughput, denials and patient satisfaction.
- Tighten data plumbing: FHIR APIs, identity matching and consent tracking to support model inputs and auditability.
- Train your teams: Short, role-based training so clinicians and staff understand capabilities, limits and escalation paths.
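The "measure outcomes" item above is largely arithmetic on a confusion matrix. A minimal sketch, with made-up alert counts purely for illustration:

```python
def alert_metrics(tp, fp, tn, fn):
    """Basic quality measures for an AI alerting workflow."""
    sensitivity = tp / (tp + fn)  # share of true events the model caught
    specificity = tn / (tn + fp)  # share of non-events correctly ignored
    ppv = tp / (tp + fp)          # precision: share of alerts worth acting on
    return {"sensitivity": sensitivity, "specificity": specificity, "ppv": ppv}

# Hypothetical month of sepsis alerts: 90 caught, 10 missed, 60 false alarms.
m = alert_metrics(tp=90, fp=60, tn=840, fn=10)
```

Tracking these per month, alongside clinician time saved and override rates, is the kind of measured-outcome evidence HHS is asking commenters to bring.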
Bottom line: HHS wants actionable input grounded in real clinical operations. Share what works, what fails and what support you need to deploy AI safely at scale.