Shadow AI use by healthcare staff puts patient safety and data security at risk

57% of healthcare workers use unauthorized AI tools at their organizations, creating security and patient safety gaps that leaders say they can't ignore. Healthcare data breaches averaged $7.42M in 2025.

Published on: May 10, 2026

Healthcare organizations lose control as staff deploy unsanctioned AI tools

Fifty-seven percent of healthcare workers have used unauthorized AI tools at their organizations, according to a survey of 518 hospital and health system staff. The widespread adoption of shadow AI (tools deployed without official approval) creates gaps in security oversight and patient safety that healthcare leaders say they cannot ignore.

The pressure driving this behavior is real. Primary care physicians need 26.7 hours per day to deliver guideline-recommended care, according to research in the Journal of General Internal Medicine. Healthcare systems are understaffed, underfunded, and drowning in administrative work. Staff reach for any tool available to get the job done.

Healthcare has adopted AI at more than twice the rate of other industries, making it both a survival strategy and a competitive necessity. But speed has outpaced governance. Most organizations lack the policies and access controls to manage what employees are actually using.

The financial and safety stakes are high

A 2025 IBM study found that 97% of organizations that experienced an AI-related security incident lacked proper access controls. Sixty-three percent had no AI governance policies at all.

The cost of a healthcare data breach averaged $7.42 million in 2025, and healthcare breaches take longer to identify and contain than those in other sectors. When a breach occurs, patient privacy is at immediate risk.

In surveys, both providers and administrators ranked patient safety, privacy, and data breaches as their top three concerns with AI. Generic tools, especially chatbots and generative AI systems, pose particular risks when used for clinical decisions or embedded in patient data applications.

Generic tools bring specific dangers to clinical care

Unsanctioned generative AI can produce hallucinations, inconsistencies, and biases. These errors matter when the tool influences patient care decisions or accesses sensitive information.

Even de-identified patient data can be re-identified by some tools, potentially linking information back to individuals and creating HIPAA violations. If a tool pulls information from broad sources without evidence grounding, clinicians lose transparency about how it reached a recommendation.
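The re-identification risk comes from quasi-identifiers: fields such as ZIP code, birth date, and sex that survive name removal and can be joined against a public roster. The sketch below illustrates this linkage attack with entirely invented data and hypothetical field names; it is not taken from any real dataset or incident.

```python
# Minimal sketch of a linkage attack: records stripped of names can still be
# re-identified when quasi-identifiers (ZIP code, birth date, sex) survive.
# All records and names here are invented for illustration.

deidentified_records = [
    {"zip": "02139", "birth_date": "1961-07-04", "sex": "F", "diagnosis": "J45.909"},
    {"zip": "60614", "birth_date": "1978-03-15", "sex": "M", "diagnosis": "E11.9"},
]

# A public roster (e.g., a voter file) sharing the same quasi-identifiers.
public_roster = [
    {"name": "Jane Doe", "zip": "02139", "birth_date": "1961-07-04", "sex": "F"},
    {"name": "John Roe", "zip": "60614", "birth_date": "1978-03-15", "sex": "M"},
]

QUASI_IDS = ("zip", "birth_date", "sex")

def relink(records, roster):
    """Join the two datasets on quasi-identifiers to recover identities."""
    index = {tuple(p[k] for k in QUASI_IDS): p["name"] for p in roster}
    matches = []
    for rec in records:
        key = tuple(rec[k] for k in QUASI_IDS)
        if key in index:
            matches.append({"name": index[key], "diagnosis": rec["diagnosis"]})
    return matches

print(relink(deidentified_records, public_roster))
```

Because both "anonymous" records share a unique combination of quasi-identifiers with the roster, each one re-links to a named individual and their diagnosis. This is why removing names alone does not satisfy de-identification in practice.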

A resident at a hospital flagged algorithmic bias as a core concern: AI systems trained on datasets that underrepresent elderly patients, racial minorities, or other groups may produce less accurate recommendations for those populations. An IT executive at a health system said their biggest worry is ensuring AI tools don't "compromise patient safety, privacy, or regulatory compliance."

The path forward requires understanding, not restriction

Healthcare leaders cannot simply ban unsanctioned tools. They need to understand why staff are using them and what problems employees are trying to solve. The answer often reveals legitimate workflow gaps.

Organizations should establish enterprise-wide AI guidelines, communicate policies clearly to staff, and identify approved tools that meet the same efficiency needs safely. Purpose-built generative AI systems, trained on expert-validated evidence and transparent about their sources, offer a safer alternative to generic tools.

In 2026, healthcare leaders will need to implement formalized, organization-wide AI governance frameworks that include proper training and guardrails. The goal is not to slow innovation but to align it with patient safety and compliance requirements.
