Shadow AI is spreading in healthcare, speeding work but raising patient safety and privacy risks

Shadow AI is common in healthcare, risking patient safety, privacy, and accountability. Cut exposure with clear policy, a safe AI workspace, guardrails, logging, and training.

Published on: Jan 28, 2026

Shadow AI is widespread in healthcare. Here's how to reduce the risk

Healthcare staff are using AI tools their organizations haven't approved. A recent Wolters Kluwer Health survey found more than 40% of workers know colleagues who use unapproved AI, and nearly 20% admit they've done it themselves. That's a direct risk to patient safety, privacy, and your security posture.

As Wolters Kluwer's chief medical officer Dr. Peter Bonis put it, "What is their safety? What is their efficacy, and what are the risks… and are those adequately recognized by the users themselves?" In short: useful doesn't mean safe.

Why staff reach for unapproved AI

  • Faster workflow: More than 50% of administrators and 45% of providers say speed is the reason.
  • Better functionality or gaps in approved tools: Nearly 40% of administrators and 27% of providers cite these.
  • Curiosity and experimentation: Over 25% of providers and 10% of administrators.

There's also a policy gap. Only 29% of providers say they're aware of their organization's main AI policies. Among administrators, it's 17%.

Where the risk shows up

  • Patient safety: AI can produce inaccurate or misleading content that slips past busy clinicians. About a quarter of respondents ranked safety as their top concern.
  • Privacy and security: Unvetted apps can expose PHI, enable data exfiltration, and expand your attack surface without IT visibility.
  • Accountability: If a tool influences care, who's responsible for errors, documentation, and audit trails?

What to implement in the next 90 days

  • Publish a one-page AI policy: Use plain language to state what's approved, what's banned (e.g., no PHI in public tools), and when to escalate.
  • Stand up a safe alternative: Provide a sanctioned AI workspace (enterprise accounts or an internal portal) with logging, DLP, and PHI safeguards.
  • Create a fast-track intake: A two-week review path for new AI use cases, so staff don't bypass IT to get work done.
  • Set role-based rules: Separate administrative, education, and clinical decision use. Higher scrutiny as you get closer to care decisions.
  • Standard prompts and red lines: Offer approved templates for summarization, coding support, and patient education; ban clinical diagnosis or dosing from general models.
  • Consent and signage for AI scribes: Make it clear when a recorder is active; define storage, retention, and sharing rules.
  • Logging and audit: Capture who used what, when, and whether PHI was present. Review high-risk activity weekly. A minimal audit-record sketch follows this list.
  • Staff training: Short, scenario-based sessions for providers and admins. Include examples of safe vs. unsafe inputs.
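
To make the logging item concrete, here is a minimal Python sketch of an audit record for sanctioned AI use. The event fields, the log_ai_event helper, and the regex-based PHI flag are illustrative assumptions, not a standard; a real deployment would ship these events to a SIEM and use a dedicated PHI-detection service rather than regexes.

```python
# Minimal sketch of an AI usage audit record: who used which tool, when,
# and whether the input looked like it contained PHI.
import json
import re
from datetime import datetime, timezone

# Naive PHI indicators, for flagging only (illustrative, not exhaustive).
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like
    re.compile(r"\bMRN[:\s]*\d{6,}\b", re.I),    # medical record number
    re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),  # date-of-birth-like
]

def looks_like_phi(text: str) -> bool:
    """Cheap heuristic; a real deployment uses a DLP/PHI service."""
    return any(p.search(text) for p in PHI_PATTERNS)

def log_ai_event(user: str, role: str, tool: str, prompt: str) -> dict:
    """Build one audit record; in production, ship it to your SIEM."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,                 # e.g., "provider", "administrator"
        "tool": tool,                 # sanctioned tool identifier
        "phi_suspected": looks_like_phi(prompt),
        "prompt_chars": len(prompt),  # log size, not content
    }
    print(json.dumps(event))          # stand-in for a log pipeline
    return event

if __name__ == "__main__":
    log_ai_event("jdoe", "administrator", "internal-portal",
                 "Summarize this denial letter for an appeal.")
```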

Clinical safety guardrails

  • Human-in-the-loop: AI outputs never stand alone; require clinician review and signoff (see the review-gate sketch after this list).
  • Source transparency: Tools that reference guidelines and literature are preferable to black boxes for point-of-care use.
  • Evaluation before rollout: Test on local cases, set accuracy thresholds, and define failure modes and handoffs.
  • Ongoing monitoring: Re-check performance after model updates, especially for CDS, triage, or discharge instructions.
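
One way to encode the human-in-the-loop rule in software is a simple release gate that refuses to surface AI output until a clinician has signed off. The AIDraft structure and function names below are hypothetical, sketched only to show the shape of such a gate.

```python
# Minimal sketch of a human-in-the-loop gate: AI output stays a draft
# until a clinician signs off, and only approved drafts can be released.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDraft:
    content: str
    source_model: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    reviewed_by: Optional[str] = None
    approved: bool = False

def sign_off(draft: AIDraft, clinician_id: str, approve: bool) -> AIDraft:
    """Record the reviewing clinician and their decision."""
    draft.reviewed_by = clinician_id
    draft.approved = approve
    return draft

def release(draft: AIDraft) -> str:
    """Refuse to surface output that lacks clinician review and approval."""
    if not (draft.reviewed_by and draft.approved):
        raise PermissionError("AI output requires clinician signoff")
    return draft.content

if __name__ == "__main__":
    draft = AIDraft("Discharge instructions draft...",
                    source_model="general-llm")
    sign_off(draft, clinician_id="dr_smith", approve=True)
    print(release(draft))
```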

Security essentials for AI

  • Data controls: Enforce DLP, PHI redaction, and encryption at rest and in transit. A redaction sketch follows this list.
  • Access management: SSO, least privilege, and project-level isolation for prompts and outputs.
  • Third-party risk: Vendor BAAs, model hosting location, retention defaults, and breach notification terms.
  • Incident playbook: Clear steps for suspected PHI leakage or harmful outputs, including patient communication when needed.
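
As a rough illustration of pre-send data controls, the sketch below scrubs obvious identifiers from a prompt before it leaves the sanctioned workspace. The patterns and the redact helper are assumptions for illustration; production systems would pair this with a vendor DLP service rather than rely on regexes alone.

```python
# Minimal sketch of pre-send PHI redaction: scrub obvious identifiers
# before a prompt leaves the sanctioned workspace.
import re

# Illustrative patterns only; production systems pair this with a
# vendor DLP service rather than relying on regexes.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\bMRN[:\s]*\d{6,}\b", re.I), "[MRN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace suspected identifiers with placeholder tokens."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

if __name__ == "__main__":
    prompt = ("Summarize the visit for John, MRN 1234567, "
              "and call back at (555) 123-4567.")
    print(redact(prompt))
    # -> Summarize the visit for John, [MRN], and call back at [PHONE].
```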

What good looks like

  • An approved AI toolbox that is as easy to use as the shadow tools it replaces.
  • Clear rules: staff know when AI is allowed, when it's prohibited, and how to request exceptions.
  • Measurable benefits in admin time saved, with zero tolerance for PHI exposure and unsafe clinical use.
  • Routine audits and re-education, just like other safety-critical processes.

Helpful resources

If your team needs AI upskilling

Provide structured training so staff stop guessing and start using approved workflows. You can explore role-based courses here: Complete AI Training: Courses by Job.

