Shadow AI Is Spreading in Health Care, and Patients Want Opt-In and Accountability

Shadow AI is popping up in hospitals, and patients are uneasy. Make it safe and accountable, with clear rules, disclosure, opt-in, and real oversight, and trust will climb.

Published on: Feb 24, 2026

Shadow AI Is Already in Hospitals. Patients Are Wary. Here's How to Close the Trust Gap

Two national surveys send a clear message. Inside health systems, 57% of professionals have bumped into or used unauthorized AI tools, known as shadow AI. Outside, 93% of patients report at least one concern about health AI, and 51% say it makes them trust care less. Yet more than 80% say that trust could increase with clear accountability. That's the signal: people don't hate AI; they hate uncertainty.

Shadow AI: Why it's happening and why it matters

Shadow AI is any AI tool used without IT approval or oversight. Think chatbots, data visualization apps, or custom models built to spot patterns. It often starts with good intent: speed and convenience.

In one survey, 40% of respondents had encountered an unauthorized AI tool, and 17% admitted using one. The reasons: faster workflow (about half), lack of approved options or missing features (about one in three), and simple curiosity (26%). The risk is real: the report cites an average health care breach cost of more than $7.4 million in 2025.

There's also a communication gap. Among administrators, 42% strongly agreed that AI policies are clearly communicated; only 30% of providers felt the same. If the people closest to patients aren't clear on the rules, shadow AI grows.

Patients are using AI-but trust is thin

Three-quarters of surveyed patients say they use AI intentionally or unintentionally, yet only 13% feel very comfortable with it. Across use cases (note writing, diagnosis, treatment suggestions), 77% to 85% want providers to disclose when AI is used. About 60% want providers to ask permission first.

Consent preferences were strongest among younger, lower-income, and less-educated patients, and among Black and Hispanic respondents (around 78% opt-in preference). Top concerns: clinicians overrelying on AI, not enough human oversight, and miscommunication or errors in care. Over half (55%) worry AI may treat some groups unfairly.

Transparency alone doesn't boost trust in every context. If AI helps insurers decide coverage, 62% say they would trust their care less. People were more positive about AI matching patients to clinical trials, with 17% reporting increased trust. Data control matters too: over 70% say the patient alone should own and control their data, 64% are open to its use to improve care, and 63% worry about their data being sold or shared for profit.

The common thread: accountability drives adoption

More than 80% of patients say clear accountability would increase trust. That means documented oversight, safety checks, and visible consequences when things go wrong. For staff, it means approved tools that actually work and guidance that's easy to follow. For patients, it means disclosure, opt-in choices, and recourse if harm occurs.

Action plan by role

  • Health system leaders and IT
    • Publish an approved AI tool catalog and a simple, fast request process for new tools.
    • Roll out systemwide AI policies with role-based training; measure comprehension quarterly.
    • Stand up AI governance: risk register, model inventory, audit logs, and human-in-the-loop checkpoints for high-stakes use.
    • Protect data: PHI boundaries, data minimization, de-identification pipelines, vendor risk reviews, and signed BAAs where required.
    • Enable safe experimentation: secure sandboxes, redaction, and monitored access for pilots.
    • Plan for failure: incident response for AI errors, rollback options, and regular post-mortems.
  • Clinicians and care teams
    • Use approved tools only; request better ones when gaps exist.
    • Disclose AI use in care and prefer opt-in consent for diagnosis, mental health, and insurance-linked cases.
    • Keep human oversight non-negotiable; validate AI outputs against clinical judgment and guidelines.
    • Report safety events tied to AI quickly and document decisions in the record.
  • Developers and data scientists
    • Build with guardrails: PII handling, data provenance, versioning, and comprehensive logging.
    • Test for bias, hallucinations, and drift; use red-teaming and scenario-based validation.
    • Provide explainability or clear rationale where feasible; define safe fallbacks and escalation paths.
    • Prefer privacy-preserving architectures (on-prem, VPC, de-ID) and update models on a defined schedule.
  • Patient experience and communications
    • Create plain-language disclosures and consent flows; mirror them in portals and after-visit summaries.
    • Publish a public-facing "Where we use AI" page that explains how to opt out where possible.
    • Offer easy channels for questions and complaints; close the loop with visible resolutions.
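The data-protection and audit-logging items above can be sketched in a few lines. This is a minimal illustration, not a compliant pipeline: the regex patterns, placeholder format, and logger name (`ai_audit`) are assumptions invented for the example, and a production system would use a vetted de-identification library covering all HIPAA identifiers.

```python
import re
import json
import logging
from datetime import datetime, timezone

# Illustrative PHI patterns only; real de-identification must cover
# every HIPAA identifier class, not just these four.
PHI_PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> tuple[str, dict]:
    """Replace PHI matches with typed placeholders; return text and counts."""
    counts = {}
    for label, pattern in PHI_PATTERNS.items():
        text, n = pattern.subn(f"[{label.upper()}]", text)
        if n:
            counts[label] = n
    return text, counts

def redact_and_log(text: str, user: str, tool: str) -> str:
    """Redact text and write an audit record before any AI call is made."""
    clean, counts = redact(text)
    logging.getLogger("ai_audit").info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,  # which approved AI tool received the text
        "redactions": counts,
    }))
    return clean
```

Running the sanitized text through a gateway like this, rather than letting staff paste raw notes into a chatbot, gives leaders the model inventory and audit trail the plan calls for while keeping the workflow fast.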

What this means for adoption

If your approved tools are slow or limited, shadow AI will fill the gap. If patients don't get a say, trust will drop. The path forward is simple, not easy: make safe tools better than the alternatives and make accountability visible.

For deeper implementation guidance, see AI Learning Path for IT Managers and explore clinical use cases at AI for Healthcare.
