Health systems need proactive governance and lifecycle risk management to safely deploy clinical AI, Duke-Margolis paper argues

Most hospitals lack formal safety protocols for clinical AI tools, a March 2026 white paper found. Risks include bias, hallucinations, and clinician overreliance; they are often invisible to patients and poorly understood by staff.

Health Systems Lack Safety Frameworks for AI Tools, White Paper Finds

Clinical AI is spreading through hospitals and health systems faster than safety protocols can keep up. A white paper published in March 2026 found that existing patient safety frameworks don't adequately detect or manage the risks these tools introduce, including bias, performance drift, hallucinations in language models, and clinician overreliance.

The gap creates real problems. AI systems often operate invisibly to patients and are inconsistently understood by the clinicians who use them, making risks difficult to identify until they affect care.

Proactive Management Required

The paper argues health systems must shift from reactive to proactive oversight. This means establishing formal governance structures with clear accountability, maintaining centralized inventories of which AI tools are in use, and integrating AI monitoring into existing patient safety reporting systems.

Risk management must span the entire lifecycle of an AI tool, from procurement through deployment to retirement.
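To make that concrete, here is a minimal sketch of what one entry in a centralized inventory, tied to lifecycle stages, might look like. The schema, field names, and stage labels are illustrative assumptions, not drawn from the paper:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class LifecycleStage(Enum):
    # Illustrative stages mirroring the paper's procurement-to-retirement framing.
    PROCUREMENT = "procurement"
    VALIDATION = "validation"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"
    RETIREMENT = "retirement"


@dataclass
class AIToolRecord:
    """One entry in a centralized inventory of clinical AI tools (hypothetical schema)."""
    name: str
    vendor: str
    clinical_use_case: str    # e.g. "sepsis risk prediction"
    accountable_owner: str    # named committee or role with clear oversight duty
    stage: LifecycleStage
    deployed_on: date | None = None
    safety_events: list[str] = field(default_factory=list)  # feeds existing safety reporting


# Register a tool and log a near miss so it surfaces in safety reporting.
inventory: list[AIToolRecord] = []
tool = AIToolRecord(
    name="example-risk-model",
    vendor="ExampleVendor",
    clinical_use_case="sepsis risk prediction",
    accountable_owner="AI Governance Committee",
    stage=LifecycleStage.DEPLOYMENT,
    deployed_on=date(2026, 1, 15),
)
inventory.append(tool)
tool.safety_events.append("2026-03-01: near miss, alert suppressed during EHR downtime")
```

A record like this gives a governance committee one place to see what is deployed, who is accountable for it, and what has gone wrong, which is the kind of visibility the paper argues most systems currently lack.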

Current State: Fragmented and Unequal

No widely adopted standards for AI safety in clinical practice exist today. Regulatory oversight remains fragmented across agencies. Many health systems lack the technical expertise and infrastructure to monitor AI performance effectively.

This creates an "AI divide" between well-resourced organizations and under-resourced ones, risking unequal care quality across institutions.
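To give a sense of what "monitoring AI performance" entails in practice, the sketch below flags possible drift by comparing a model's recent discrimination (AUC) against its validated baseline. The metric and threshold here are assumptions for illustration, not an established standard:

```python
def drifted(baseline_auc: float, recent_auc: float, tolerance: float = 0.05) -> bool:
    """Flag possible performance drift when recent AUC falls more than
    `tolerance` below the validated baseline (illustrative threshold)."""
    return (baseline_auc - recent_auc) > tolerance


# Hypothetical monthly review: validation-time AUC vs. AUC on recent cases.
if drifted(baseline_auc=0.87, recent_auc=0.79):
    print("Possible performance drift: escalate to the safety review process")
```

Checks like this would run on a schedule and feed the same safety reporting channels described above, which is precisely the infrastructure many under-resourced systems don't yet have.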

What Needs to Change

The paper calls for three categories of action:

  • Health systems should strengthen cross-system learning, collaborate more closely with AI vendors, and improve tracking of both safety events and near misses
  • Policymakers should clarify regulatory expectations, create incentives for safety infrastructure, and enable information sharing across institutions
  • Organizations need to build governance and monitoring systems that ensure AI improves patient outcomes without compromising safety

The authors acknowledge that building this infrastructure requires resources and expertise many systems don't currently possess. The question facing health system leaders is whether to invest now or manage crises later.

