Shadow AI is spreading in healthcare - and it's riskier than it looks
Shadow AI use is common inside hospitals and health systems, according to a recent Wolters Kluwer Health survey. More than 40% of respondents said they know colleagues using unapproved AI tools, and nearly 20% admitted they've done it themselves.
The tension between what's helpful for one person and what's safe for the organization is the real problem. As Wolters Kluwer's chief medical officer, Dr. Peter Bonis, put it: "What is their safety? What is their efficacy, and what are the risks associated with that? And are those adequately recognized by the users themselves?"
Why people are doing it
- Faster work: Over 50% of administrators and 45% of providers used unapproved tools for speed.
- Better features or no approved option: Nearly 40% of admins and 27% of providers cited functionality gaps or lack of sanctioned tools.
- Curiosity: More than 25% of providers and 10% of admins tried tools to experiment and learn.
The risks that show up at the bedside
Accuracy remains the biggest clinical concern. AI can produce confident but wrong answers, and those mistakes aren't always caught before they reach the patient. About a quarter of respondents named patient safety as their top worry.
There's also security. Unapproved tools increase exposure to cyberattacks and data leaks. Healthcare is a prime target, and one stray prompt could include identifiers or sensitive notes.
For reference, see the NIST AI Risk Management Framework and HHS' HIPAA Security Rule guidance.
Policy awareness is thin
Policy isn't reaching the front line. Just 29% of providers say they know their organization's main AI policies, and awareness is even lower among administrators, at 17%.
Many clinicians have seen guidance around AI scribes, which are widely deployed. But that doesn't mean they know the full scope of what's allowed, what's banned, and how data should be handled across other tools.
Governance that actually reduces shadow AI
- Publish an approved tool list with use cases, data boundaries, and PHI rules. Keep it short and living.
- Create a low-friction "AI sandbox" where staff can test tools with de-identified data and guardrails.
- Draft a plain-language AI policy: what's permitted, what's restricted, what needs review. Two pages, max.
- Define PHI-safe prompting. Ban pasting identifiers into public models; require de-identification or synthetic data (a minimal sketch follows this list).
- Stand up a fast-track review for high-demand tools. If users wait months, they'll go around you.
- Require vendor security checks, business associate agreements (BAAs) where needed, and clear data retention and usage terms.
- Set up audit trails: who used what, for which task, and with which data classification.
- Train by role: clinicians, administrators, IT, revenue cycle. Use short scenarios: "approved," "ask," or "avoid."
- Offer safe alternatives (e.g., a sanctioned note-drafting or search assistant) so people don't default to unsanctioned tools.
- Establish a confidential reporting path for new tools staff want - and close the loop fast.
- Measure outcomes: time saved, error rates, rework, incident count. Keep what works; retire what doesn't.
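To make the PHI-safe prompting and audit-trail items concrete, here is a minimal sketch of what a prompt guard could look like, assuming a hypothetical internal gateway that staff prompts pass through before reaching any model. The regex patterns, tool name, and log format are illustrative placeholders, not a vetted de-identification pipeline; a real deployment would rely on approved de-identification tooling and your organization's own logging and data-classification standards.

```python
# Minimal sketch of a PHI-safe prompting guard with a basic audit trail.
# All patterns, names, and file paths are illustrative assumptions.
import re
import json
import getpass
from datetime import datetime, timezone

# Illustrative patterns for common identifiers (not exhaustive, not a
# substitute for a proper de-identification service).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "dob": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_phi(text: str) -> tuple[str, list[str]]:
    """Replace likely identifiers with placeholders and report what was found."""
    found = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, found

def audit_record(user: str, tool: str, task: str,
                 classification: str, redactions: list[str]) -> str:
    """Build the audit entry: who used what, for which task, with which data class."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "task": task,
        "data_classification": classification,
        "redactions": redactions,
    })

if __name__ == "__main__":
    raw_prompt = "Summarize the discharge note for MRN: 12345678, DOB 03/14/1962."
    safe_prompt, redactions = redact_phi(raw_prompt)
    print(safe_prompt)
    # Append the audit entry to a local log file (a stand-in for a central log store).
    with open("ai_prompt_audit.log", "a") as log:
        log.write(audit_record(getpass.getuser(), "sanctioned-assistant",
                               "note summarization", "PHI-redacted", redactions) + "\n")
```

The design point is that the safe path and the audit trail live in the same place staff already send prompts, so doing the right thing doesn't require extra steps.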
What good looks like
Your clinicians get quick access to a small set of vetted tools that clearly state what data they touch. Your admins have a simple policy and a fast process to approve new use cases.
Security knows where AI is used and by whom. Leaders see measurable gains in documentation time, information retrieval, or coding support - without an uptick in risk or near misses.
Bottom line
Shadow AI isn't a user problem; it's a supply and clarity problem. Give people safe, useful options and a fast way to request more - and the shadow goes away.
If your teams need practical upskilling, consider concise, policy-aware training tailored to healthcare roles.