HHS unveils AI strategy: what healthcare leaders need to know
The U.S. Department of Health and Human Services (HHS) has released a 20-page strategy to expand its use of artificial intelligence. The plan builds on the administration's push to normalize AI across federal work, while raising hard questions about protecting sensitive health information.
HHS frames this as a "first step" to boost efficiency and coordinate adoption across divisions. The document also hints at bigger moves: using AI to analyze patient data and accelerate drug development.
"For too long, our Department has been bogged down by bureaucracy and busy-work," Deputy HHS Secretary Jim O'Neill wrote. "It is time to tear down these barriers to progress and unite in our use of technology to Make America Healthy Again."
A 'try-first' culture and five pillars
The strategy encourages a "try-first" culture so staff can use AI to work faster and smarter. Earlier this year, HHS rolled out access to ChatGPT for every employee, signaling a department-wide push to normalize AI assistants for daily tasks.
HHS outlines five pillars for the program:
- Create a governance structure to manage risk across the department.
- Design a shared suite of AI resources for common use cases.
- Empower employees with tools, training, and clear usage policies.
- Fund programs that set standards for AI in research and development.
- Incorporate AI into public health operations and patient care workflows.
According to the plan, divisions are already working on AI that can "deliver personalized, context-aware health guidance to patients by securely accessing and interpreting their medical records in real time." Some in the Make America Healthy Again movement are wary of big tech involvement and data access deals that could expose personal information.
Speed vs. safety: experts press for details
AI researcher Oren Etzioni said the ambition is encouraging but warned that speed cannot outpace safety: centralized data, fast deployment, and an AI-enabled workforce bring real risk when health data is involved. He noted that calls for "gold standard science," risk assessments, and transparency are promising on paper, but questioned whether those standards will be met under current leadership.
Darrell West of the Brookings Institution said the strategy mentions stronger risk management but lacks specifics. He flagged gaps around how sensitive medical information will be handled and how aggregated data will be protected when analyzed by AI. Done carefully, he said, this could become a high-performing model for modern government operations.
HHS has faced criticism before for pushing legal boundaries on data sharing, including handing Medicaid recipients' health data to Immigration and Customs Enforcement. That history raises the stakes for trust, consent, and auditability in any new AI rollout.
What this means for providers, payers, and public health teams
- Map your AI footprint. Inventory every AI tool in use or planned, from ambient scribing to prior auth triage. Separate pilots from production and assign owners.
- Tighten data governance. Align processes with HIPAA requirements for privacy and security, and reference current guidance from HHS OCR on the HIPAA Privacy Rule.
- Define risk tiers. Set review gates for models touching PHI, including validation, monitoring, incident response, and human-in-the-loop checkpoints.
- Clarify consent and transparency. Document how data is used, minimization practices, retention limits, and how patients can opt out where applicable.
- Strengthen vendor due diligence. Require clear terms on data use, de-identification, fine-tuning, model provenance, logging, and security certifications.
- Train your workforce. Cover prompt hygiene, PHI handling, bias detection, accuracy checks, and handoff to clinicians. If you're building AI literacy across teams, tailor training to each job role.
- Measure outcomes. Track error rates, turnaround times, patient experience, and cost-to-serve. Retire tools that don't clear your safety and quality bar.
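The inventory and risk-tier steps above can be sketched as a minimal script. This is an illustrative assumption, not anything the HHS strategy or HIPAA prescribes: the `AITool` fields, tier names, and review gates are invented placeholders your organization would replace with its own policy.

```python
from dataclasses import dataclass

# Hypothetical sketch: AITool, risk_tier, and the gate names below are
# illustrative, not drawn from the HHS strategy or any regulation.

@dataclass
class AITool:
    name: str
    touches_phi: bool     # processes protected health information
    patient_facing: bool  # output reaches patients or affects care
    production: bool      # live use, as opposed to a pilot

def risk_tier(tool: AITool) -> str:
    """Assign a simple three-level risk tier."""
    if tool.touches_phi and tool.patient_facing:
        return "high"
    if tool.touches_phi or tool.patient_facing:
        return "medium"
    return "low"

def required_gates(tool: AITool) -> list[str]:
    """Map a tier to review gates; higher tiers add more checks."""
    base = ["owner assigned", "usage policy acknowledged"]
    medium = base + ["validation report", "monitoring plan"]
    high = medium + ["incident response plan", "human-in-the-loop checkpoint"]
    return {"low": base, "medium": medium, "high": high}[risk_tier(tool)]

if __name__ == "__main__":
    scribe = AITool("ambient-scribe", touches_phi=True,
                    patient_facing=True, production=True)
    print(risk_tier(scribe), required_gates(scribe))
```

A tool like an internal document summarizer that never sees PHI would land in the "low" tier with only ownership and policy gates, while a patient-facing scribe gets the full set, including a human checkpoint before anything reaches care delivery.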
Key numbers to watch
- HHS reported 271 active or planned AI implementations in FY 2024.
- The department projects a 70% increase in AI implementations in 2025.
- Chatbots and AI assistants are encouraged across the federal workforce, with HHS acting as a major testbed.
Bottom line: HHS is moving fast on AI. For healthcare organizations, this is the moment to lock down governance, document clinical oversight, and set clear patient safeguards before scaling anything that touches care delivery or PHI.