Empathy First, Shared Decisions Next: Guiding Patients Through AI Health Advice

Patients are showing up with chatbot answers that challenge clinical plans. Lead with empathy, explain limits, and use a simple playbook to keep trust and care on track.

Categorized in: AI News, Healthcare Management
Published on: Dec 17, 2025

How Docs Can Manage Patients Bringing AI Medical Advice

Patients are showing up with chatbot answers that challenge clinical plans. This is now routine, not rare. As a leader, your job is to give your teams a clear communication framework, reduce conflict, and keep care on track.

Recent research in a major otolaryngology journal recommends an empathy-first approach, followed by shared decision-making. That stance gives clinicians a way to acknowledge patient effort, clarify misinformation, and move forward without fracturing trust.

Why this matters for healthcare leaders

  • Use of large language model (LLM) chatbots for health questions is widespread. One report found that roughly a third of patients asked tools like ChatGPT for health advice at least weekly in 2025, with about one in ten doing so daily.
  • LLMs can sound highly confident even when they're wrong. That confidence can sway patients and collide with clinical judgment.
  • Without a standard approach, these visits run long, burn out staff, and risk inconsistent care.

What the research shows

Researchers from UC San Diego and UC Irvine reviewed why patients use AI, how often the tools miss the mark, and which communication techniques reduce friction. The core takeaway: lead with empathy, then explain context, limitations, and options.

In practice, that means validating why a patient looked for answers, explaining where public-facing AI differs from clinician-grade resources, and using the moment to engage in shared decision-making.

Conversation framework: empathy first, knowledge second

  • 1) Validate and connect. Recognize the patient's effort and concern. This lowers defensiveness and preserves rapport.
  • 2) Explain the mismatch. Outline why LLMs can conflict with clinical guidance: no full history, no exam, limited context on clinic resources, and safety considerations.
  • 3) Teach, don't preach. Briefly highlight where AI gets it right and where it often fails (outdated data, bias, confident tone on uncertain claims).
  • 4) Shift to shared decision-making. Lay out realistic options, risks, benefits, and next steps that fit the patient's goals and your setting.

Short scripts your team can use

  • Empathy opener: "I appreciate you bringing this in. It shows you're engaged and want the best outcome."
  • Context shift: "These tools don't have your full history or exam findings, so they can miss things or overstate how well a particular test or procedure fits your situation."
  • Bridge to options: "Here's what the evidence supports for your situation. We can look at A, B, or C, and I'll explain the trade-offs so we decide together."

Operational steps for clinic managers

  • Standardize intake. Add a question: "Did you consult an AI tool or website for this issue?" Attach uploads or pasted text to the chart.
  • Create a quick-reference playbook. One-page guide with empathetic phrases, LLM limitation talking points, and a shared decision-making checklist.
  • EHR smart phrases. Templates for documenting AI-sourced advice, reconciliation with clinical findings, and the plan agreed with the patient.
  • Timeboxing and triage. Route complex AI-driven requests to longer appointment slots or follow-up calls with care coordinators.
  • Team training. Run brief role-plays during huddles. Focus on tone, de-escalation, and concise explanations.

Risk, quality, and documentation

  • Note the source. Capture the AI tool name, date accessed, and key claims the patient is relying on.
  • State the clinical rationale. Contrast AI advice with patient-specific findings and current guidelines.
  • Close the loop. Document that options, risks, and benefits were discussed and the plan was agreed upon.

What to explain about LLM limitations (briefly)

  • They don't have the patient's full history, physical exam, or real-time lab/imaging context.
  • They may use outdated sources or mix high- and low-quality evidence.
  • They can sound certain even when the evidence is mixed or not applicable.

Patient education assets you can deploy

  • One-page handout: "How to use AI health tools wisely."
  • Portal message template: "Bring your questions. We'll review them together and apply them to your case."
  • FAQ item: "Why your clinician's recommendation may differ from a chatbot."

Metrics to watch

  • Visit length variance for AI-related encounters
  • Patient satisfaction (communication and trust items)
  • No-show and adherence rates after AI-related visits
  • Escalations or complaints tied to AI conflicts

Policy pointers for leadership

  • Scope: Clarify that AI can inform education but doesn't replace clinician judgment.
  • Guardrails: No direct copy/paste from public LLMs into charts. Verify against trusted clinical sources.
  • Governance: Assign oversight to your quality or clinical informatics committee. Review cases quarterly for trends.

Upskill your team

If you're building AI literacy for clinicians, educators, or care coordinators, consider structured learning paths to support safe use and patient communication. See curated options by role here: AI courses by job.

Bottom line

Lead with empathy, then clarify context, then decide together. With a simple playbook, staff training, and light-touch policy, your clinics can turn AI-sourced advice into a productive part of the visit instead of a detour.

