HHS seeks industry input to fast-track clinical AI while safeguarding patients

HHS wants input from clinicians, vendors, and health systems on moving AI from pilots to safe clinical use. Its RFI seeks ideas on rules, payment, and research to protect patients.

Published on: Dec 23, 2025


The Department of Health and Human Services is asking healthcare leaders, clinicians, and vendors how it can help move AI from back-office pilots into safe, effective clinical use. A new request for information (RFI) seeks ideas on regulation, payment, and research investments that improve outcomes, reduce burden, and lower costs without putting patients at risk.

What HHS is asking

  • How digital health and software rules should evolve to include AI tools used in clinical care.
  • How payment and reimbursement can be simplified to encourage appropriate use of AI-enabled interventions.
  • What research, standards, and public-private partnerships would speed adoption and establish best practices.
  • What steps HHS can take to foster competition, improve access, and keep AI tools affordable.

Context you should know

The RFI comes as the administration has emphasized fewer federal barriers on AI to avoid slowing deployment, including an executive order this month challenging some state AI laws. That stance has also left healthcare with limited federal guardrails while the tech matures.

The risks are real: incorrect or misleading outputs, biased training data, and model drift over time can harm patients if not managed. That's why many health systems have focused AI on administrative work (revenue cycle, prior authorization, and documentation), where clinical risk is lower. HHS now wants input on responsibly moving into direct clinical use.

Who is leading this

The RFI is issued by the Office of the Deputy Secretary and the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology (ASTP/ONC). HHS says it aims for a regulatory environment that is well understood, predictable, and proportionate to risk.

What to include in your comment

  • Clear definitions: clinical decision support vs. autonomous recommendations; software as a medical device vs. workflow tools.
  • Validation: pre-deployment testing on local data; external benchmarking; ongoing monitoring for drift and degradation.
  • Bias safeguards: dataset documentation, subgroup performance reporting, and corrective actions when disparities appear.
  • Human oversight: required clinician review thresholds, escalation paths, and fail-safes.
  • Transparency: model versioning, change logs, and performance summaries clinicians can understand.
  • Data privacy and security: alignment with HIPAA and 42 CFR Part 2; clear vendor responsibilities; breach protocols.
  • Procurement standards: evidence requirements, audit rights, uptime/SLA expectations, and support for local integration.

Payment levers HHS should consider

  • New or revised codes for AI-enabled services and care management where AI augments clinician work.
  • Outcome-based or episode-based payment models that reward measurable improvements attributed to AI tools.
  • Coverage with evidence development for higher-risk use cases pending real-world data.
  • Clarity on who bills (provider vs. facility), documentation requirements, and whether AI licensing costs are reimbursable.

Key risks HHS is watching

  • Incorrect or misleading AI outputs that influence clinical decisions.
  • Bias in training data that harms specific populations.
  • Performance degradation after deployment without active monitoring.

Who HHS wants to hear from

  • Developers building clinical AI tools.
  • Hospitals, practices, and purchasers implementing AI.
  • Clinicians and organizations that want AI but face access or affordability barriers.

How to participate

  • Comments are due 60 days after the RFI is published in the Federal Register on Dec. 23.
  • Watch the Federal Register for the listing and submission portal: federalregister.gov.
  • Track related policy updates and technical resources from ONC: healthit.gov.

What healthcare leaders can do now

  • Inventory current and planned AI tools; classify by clinical risk and required oversight.
  • Stand up an AI governance group with clinical, IT, legal, and equity leads.
  • Pilot in low-risk pathways with measurable outcomes, then expand based on data.
  • Set vendor requirements for evidence, bias testing, monitoring, and incident reporting.
  • Plan reimbursement: identify eligible codes, documentation needs, and total cost of ownership.

Skills and training

If you're building internal capability for safe AI adoption (clinical validation, bias testing, and workflow integration), consider structured upskilling for clinical and operations teams. Curated options by job role: Complete AI Training: Courses by Job.
