AI disclosure in healthcare: What patients must know
AI now assists with imaging reads, clinical decision support, patient messaging, and back-office workflows. With billions of people lacking access to essential care and a growing health workforce gap, its role will keep expanding. That puts a simple question in front of every healthcare leader: should patients be told when AI plays a role in their care?
In the U.S., there's no single federal rule that requires broad AI disclosure in healthcare. States are stepping in with their own laws, and the details vary. What's consistent across the board: transparency is a trust issue, not a technical footnote.
Why disclosure matters
Patients expect to be informed when technology influences decisions about diagnosis, treatment, coverage, or communication. Hidden AI use erodes trust fast, even if the outcome is accurate.
HIPAA doesn't directly regulate AI, but its core principles still apply. Covered entities must explain how protected health information is used and safeguarded. If AI analyzes or generates clinical insights, nondisclosure can undermine informed consent and patient understanding. See HHS's HIPAA guidance for details.
Where disclosure is usually required or expected
- Patient-facing clinical communications (e.g., messages drafted with AI)
- Utilization review and utilization management
- Claims processing and coverage determinations
- Mental health or therapeutic interactions (including chatbots)
These are high-impact because they directly affect access to care and how people interpret their health information.
State action: quick snapshot
- California AB 3030: Clinics and physician offices that use generative AI for patient communications must include a clear disclaimer and provide a path to reach a human clinician.
- California SB 1120: Applies to health plans and disability insurers. Requires safeguards when AI supports utilization review, mandates disclosure, and requires that licensed professionals make the final medical necessity determinations.
- Colorado SB24-205: Covers "high-risk" AI that can influence approval or denial of services. Requires safeguards against algorithmic discrimination and disclosure of AI use.
- Utah HB 452: Mental health chatbots must clearly disclose AI use.
- Utah SB 149 and SB 226: Extend disclosure requirements to regulated occupations, including healthcare professionals, supporting transparency in therapeutic and clinical services.
- Also moving: Massachusetts, Rhode Island, Tennessee, and New York are considering or enforcing rules that require disclosure and human review when AI influences utilization review or claims outcomes.
Healthcare depends on trust
Patients share sensitive information and follow care plans when they believe decisions are ethical and accountable. Clear disclosure reinforces that licensed professionals remain responsible for clinical decisions. Done well, it improves engagement and reduces confusion about how data is used.
For context on the global access pressures AI aims to relieve, see the World Economic Forum's work on essential care gaps.
A practical playbook for disclosure
- Map AI touchpoints: List where AI influences patient communications, clinical decisions, utilization review, claims, and mental health tools (a minimal registry sketch follows this list).
- Define "meaningful use" thresholds: Document when AI materially influences a decision or message and triggers disclosure.
- Write plain-language notices: Say where AI is used, why it's used, what data informs it, how to reach a human, and how to opt out where allowed.
- Human-in-the-loop: Require licensed professionals to make or validate medical decisions; set a fast escalation path to a human for patient-facing tools.
- Staff training: Provide quick scripts for front desk, nurses, and care managers to answer "How was AI used here?"
- Patient materials: Update the Notice of Privacy Practices, consent forms, portals, IVR prompts, and chatbot UIs.
- Vendor controls: Bake disclosure, audit logs, appeal workflows, and bias testing into contracts and BAAs.
- Fairness checks: Test for differential impacts across demographics; document findings and mitigations (see the second sketch after this list).
- Logging and QA: Track where AI is used, overrides by clinicians, patient opt-outs, and complaints; review monthly.
- Incident response: Define how you correct AI-related errors, notify affected patients, and prevent repeats.
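One way to make the first two playbook steps concrete is a machine-readable registry of AI touchpoints, each flagged with whether it triggers patient disclosure. This is a minimal sketch under assumed field names and an assumed disclosure rule ("disclose when AI materially influences a decision or produces patient-facing content"); your own policy and systems audit would define the real entries and thresholds.

```python
from dataclasses import dataclass


@dataclass
class AITouchpoint:
    name: str                  # where AI is used
    influences_decision: bool  # does it materially influence a decision or message?
    patient_facing: bool       # does the patient see or receive the output?


# Illustrative inventory; real entries come from your own systems audit.
TOUCHPOINTS = [
    AITouchpoint("portal message drafting", influences_decision=True, patient_facing=True),
    AITouchpoint("utilization review triage", influences_decision=True, patient_facing=False),
    AITouchpoint("appointment reminder scheduling", influences_decision=False, patient_facing=False),
]


def requires_disclosure(tp: AITouchpoint) -> bool:
    """Assumed threshold: disclose whenever AI materially influences a decision
    or produces patient-facing content."""
    return tp.influences_decision or tp.patient_facing


for tp in TOUCHPOINTS:
    status = "required" if requires_disclosure(tp) else "not required"
    print(f"{tp.name}: disclosure {status}")
```

A registry like this also gives compliance and vendor teams a single artifact to review when laws change, rather than re-auditing systems from scratch.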
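For the fairness-check step, the sketch below compares approval rates across demographic groups and flags any group whose rate falls below 80% of the best-performing group's rate, a common "four-fifths" heuristic. The sample records, column layout, and threshold are illustrative assumptions, not a regulatory standard.

```python
from collections import defaultdict

# Hypothetical decision records: (demographic_group, approved) pairs.
# In practice these would come from your utilization-review or claims logs.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

FOUR_FIFTHS_THRESHOLD = 0.8  # common heuristic; your policy may set a different bar


def approval_rates(records):
    """Compute the approval rate for each demographic group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}


def flag_disparities(records, threshold=FOUR_FIFTHS_THRESHOLD):
    """Flag groups whose approval rate falls below `threshold` times the best group's rate."""
    rates = approval_rates(records)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best > 0 and r / best < threshold}


print("Approval rates:", approval_rates(decisions))
print("Groups needing review:", flag_disparities(decisions))
```

Flagged groups aren't proof of discrimination on their own; they mark where to dig into the underlying decisions, document findings, and record mitigations.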
What good disclosure looks like
Clinical message: "This message was drafted with the help of AI and reviewed by your care team. Contact us to speak with a clinician."
Utilization review or coverage notice: "An AI tool assisted our initial review. A licensed clinician made the final decision. You can request human review and appeal at the link provided."
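As a sketch of how a messaging pipeline might enforce disclosures like the examples above, the code below blocks unreviewed AI drafts and appends a plain-language notice to AI-drafted messages. The message object, field names, and disclosure wording are assumptions for illustration, not tied to any particular vendor or EHR.

```python
from dataclasses import dataclass
from typing import Optional

AI_DISCLOSURE = (
    "This message was drafted with the help of AI and reviewed by your care team. "
    "Contact us to speak with a clinician."
)


@dataclass
class PatientMessage:
    body: str
    drafted_by_ai: bool
    reviewed_by: Optional[str] = None  # licensed reviewer's ID, if any


def prepare_for_send(msg: PatientMessage) -> str:
    """Block unreviewed AI drafts and append the disclosure to AI-drafted messages."""
    if msg.drafted_by_ai and not msg.reviewed_by:
        raise ValueError("AI-drafted message must be reviewed by a clinician before sending.")
    if msg.drafted_by_ai:
        return f"{msg.body}\n\n{AI_DISCLOSURE}"
    return msg.body


# Example: an AI-drafted message that has been reviewed goes out with the disclosure appended.
draft = PatientMessage(
    body="Your lab results are within the normal range.",
    drafted_by_ai=True,
    reviewed_by="NP-1042",
)
print(prepare_for_send(draft))
```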
Risks of skipping disclosure
- Higher litigation and regulatory exposure
- Reputational damage and patient churn
- Lower clinician trust in tools and more rework
- Ethical concerns around autonomy and transparency
What patients will start to see
- Simple AI disclaimers in messages, portals, and coverage letters
- Clear ways to reach a human and request review
- Stronger audit trails and clearer accountability
Bottom line for healthcare teams
AI can improve efficiency, expand access, and support clinicians, but its value rests on trust. Disclosure doesn't slow progress; it builds confidence in the tools and the professionals using them. Start with transparency, train your teams, and keep a human in charge of clinical judgment.
If you're standing up AI programs and need practical training for staff, explore role-based options at Complete AI Training.