Healthcare's Shadow AI Problem Is Growing, and Security Teams Can't Stop It
Physicians and clinicians are using unsanctioned AI tools to manage overwhelming workloads, introducing security risks that hospitals cannot monitor or control. Healthcare organizations must accept this reality and focus on containment rather than prohibition.
Medical professionals adopt these tools to save time on dosing calculations, clinical documentation, billing tasks, and information retrieval. When they work faster, they spend more time with patients. That pressure is real, and security teams cannot reverse it through policy alone.
The problem: these tools operate outside hospital networks and security oversight. When clinicians use personal devices, unvetted applications, or public large language models, they expand attack surfaces and risk exposing protected health information to unmanaged environments. A ransomware attack hitting a hospital already dealing with shadow AI becomes harder to investigate and recover from.
The Visibility Gap Creates Unlimited Risk
Shadow AI creates two distinct problems, according to security leaders. First, it hides assets from security teams entirely. Second, it deploys workloads with broad system privileges, particularly AI agents, creating what amounts to an unlimited blast radius if compromised.
A Wolters Kluwer report found that 41% of healthcare workers knew colleagues were using unauthorized AI tools. Nearly 50% said they turned to shadow AI for faster workflows. One in three cited a lack of approved alternatives or tools that didn't meet their needs.
Vendors are making this worse. They now market directly to physicians at conferences, offering agreements that bypass hospital governance. Those agreements typically shift all liability to individual doctors, a tempting offer when the alternative is administrative burden.
Prohibition Doesn't Work
Security leaders acknowledge the obvious: people will not stop using shadow AI. Asking how to prevent it is "a losing question," according to Doug Merritt, CEO of Aviatrix. The tools are too productive and too easy to access.
Instead, organizations should shift strategy. Accept that shadow AI exists. Assume it's running in your environment right now. Focus on discovery, visibility, and containment.
This means implementing zero-trust policies for AI workloads: knowing exactly who they communicate with and what data leaves the network. It means creating an enterprise AI plan with approved tools that actually meet clinician needs. It means working with vendors who can deploy proper security and privacy controls.
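The discovery step can start from data most hospitals already collect: outbound proxy or DNS logs. The sketch below, a minimal illustration rather than a production tool, matches logged destinations against a hypothetical watchlist of generative-AI service domains to surface which devices are reaching them. The domain list, log format, and field names are all assumptions for the example.

```python
# Minimal sketch: flag outbound requests to known generative-AI
# endpoints in a proxy log. The domain watchlist and CSV log format
# below are illustrative assumptions, not a vetted blocklist.
import csv
from collections import Counter

# Hypothetical watchlist; extend per environment.
AI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}

def find_shadow_ai(log_lines):
    """Count requests per (device, domain) for watched AI domains.

    Expects CSV rows of: timestamp,device_id,dest_domain
    """
    hits = Counter()
    for row in csv.reader(log_lines):
        if len(row) != 3:
            continue  # skip malformed rows
        _, device, domain = (field.strip() for field in row)
        if domain in AI_DOMAINS:
            hits[(device, domain)] += 1
    return hits

# Example log lines with made-up device names.
sample = [
    "2025-01-10T09:14:02,ward3-tablet-07,api.openai.com",
    "2025-01-10T09:15:40,ward3-tablet-07,ehr.hospital.internal",
    "2025-01-10T09:16:05,radiology-pc-12,claude.ai",
]
for (device, domain), count in sorted(find_shadow_ai(sample).items()):
    print(f"{device} -> {domain}: {count} request(s)")
```

A real deployment would pull from SIEM or DNS telemetry and treat hits as a starting point for conversation with clinicians, not for punishment, in line with the containment-over-prohibition approach described above.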
Awareness and Governance Replace Denial
Healthcare leaders presenting at RSAC 2026 emphasized that raising awareness among staff matters more than enforcement. Clinicians aren't trying to be evasive. They're trying to do their jobs better.
Organizations should discuss workload challenges directly with clinicians and find ways to ease them through approved channels. When new tools are introduced, patients should have the option to opt out.
The healthcare industry holds the most sensitive data of any sector. That reality demands a stronger security posture, not stricter prohibitions. The difference between those two approaches determines whether shadow AI becomes a manageable risk or a critical vulnerability.