AI in Healthcare: Dr. Rochelle Walensky on Promise, Pitfalls, and the Work We Need to Do
At Future of Health's invitation-only summit in Los Angeles, former CDC director Dr. Rochelle Walensky offered a clear-eyed view of AI in care delivery. She sees AI filling real gaps, especially where clinicians are scarce, while warning against piling unrealistic efficiency demands onto the people doing the work. Her message was blunt: rethink training, protect clinical judgment, and keep the patient at the center, even if it trims margin.
Where AI is delivering value today
- Triage and guidance in regions with too few clinicians.
- Documentation support and summarization that reduce clerical overhead.
- Decision support for differential diagnoses that surfaces missed considerations.
- Telehealth that expanded access during the pandemic, though some gains have faded due to policy shifts.
- Public health signals: wastewater surveillance, crowdsourced temperatures (e.g., Kinsa), and AI review of chest X-rays for tuberculosis.
- Outbreak detection using nontraditional data, from cooling tower monitoring for Legionella to signals like restaurant reservations, since people tend not to dine out when sick.
The clinician workload trap
AI can speed charting, but that doesn't mean clinicians should be told to cram more visits into the same hour. If every AI-enabled workflow still demands a human to synthesize "a gazillion data points" and sign off, we've just moved the bottleneck. That cognitive load is not sustainable.
- Protect time for judgment, nuance, and patient connection; don't convert every minute saved into more throughput.
- Tune AI to remove clicks and fragmentation, not to dictate care.
- Set panel and productivity targets based on outcomes and safety, not just volume.
- Measure whether clinicians feel their work is easier and patients feel heard-then adjust.
Bias hasn't gone away
Walensky is "very concerned" about bias and notes the conversation has gone quiet. Underserved groups can be misrepresented in training data, leading systems to underestimate their need. That is the opposite of equity.
- Demand evidence of performance across demographics and settings before deployment.
- Monitor disagreements between clinicians and AI; study who is right, when, and why.
- Track false positives and negatives by subgroup and publish results internally (see the sketch after this list).
- Stand up a human override by default and make it simple to report model issues.
For a solid foundation, review the WHO's global guidance: Ethics and governance of AI for health.
Rethinking medical training
If AI handles chart reviews, summaries, and routine documentation, training must shift. What do humans do best, and how do we develop those skills on purpose?
- Diagnostic reasoning that integrates context beyond the dataset.
- Communication: empathy, expectation setting, and shared decision-making.
- AI literacy: how systems work, where they fail, and how to verify outputs.
- Data basics: bias, drift, and feedback loops in clinical settings.
- Workflow design and quality improvement with AI in the loop.
- Ethics, safety, and accountability in augmented care.
Practical guardrails for health leaders
- Start with narrow use cases where benefit is clear and risk is low.
- Set success metrics up front: clinician time saved, patient outcomes, equity impact, and cost.
- Run A/B pilots with opt-in clinicians; collect qualitative and quantitative feedback.
- Integrate into the EHR to reduce toggling; eliminate duplicate documentation.
- Establish model monitoring: performance, drift, safety events, and bias checks (a minimal drift-check sketch follows this list).
- Create a cross-functional governance group with clinical, legal, data, and patient voices.
- Give clinicians a safe "off switch" without penalty.
Policy still matters
Telehealth proved its value but lost momentum as policies shifted. Keep what worked during the pandemic and align incentives with outcomes, not clicks. Public health tools like wastewater surveillance deserve stable funding and clear standards.
- Preserve telehealth flexibilities that expand access and continuity.
- Support wastewater surveillance as an early warning system (see CDC's program: National Wastewater Surveillance System).
- Enable privacy-preserving data use for population health signals from nontraditional sources.
- Fund community-based deployments that reach underserved groups first.
Keep the patient first, even if it trims margin
There's plenty of money in healthcare, but profit cannot outrank patients. If AI helps people get better care, even with slightly lower margin, the system still wins. Do that consistently, and value will follow.
Next steps for clinical teams
- Map two workflows that drain time today; pilot AI that removes clicks, not judgment.
- Define "must-have human decisions" in those workflows and make them explicit.
- Stand up a simple bias and safety review cadence; publish findings to staff.
- Invest in AI literacy for your team; make verification a shared habit.
If you're building AI skills across roles, this curated directory can help you find practical courses by job function: Complete AI Training - Courses by Job.