The Misuse of AI Chatbots in Healthcare: Risks, Realities, and Responsible Innovation
Date and time: January 28, 2026 | 12:00 p.m. ET
AI chatbots and other large language model (LLM) tools are being pulled into clinical conversations they were never built to handle. Clinicians, staff, and patients are asking tools like ChatGPT, Claude, Copilot, Gemini, and Grok for guidance on conditions, treatments, device use, and even what supplies to buy.
Answers are quick and can seem helpful. They can also be wrong, sometimes dangerously wrong. This session zeroes in on when these tools help, when they harm, and how to build guardrails that keep care safe.
What you will learn
- Limits of LLMs that make answers unreliable (hallucinations, gaps in clinical context, prompt sensitivity).
- Risks from using chatbots for patient-care tasks: misinformation, overreliance on unvetted tools, poor delegation of judgment, privacy exposure, workflow friction, and loss of trust.
- Real failure cases and the technical, ethical, and human factors behind them.
- Clear lines between appropriate and inappropriate use in healthcare settings.
- Safeguards and governance strategies to curb misuse and support responsible adoption.
- Regulatory and ethical considerations that should guide integration into practice.
Why this matters
General-purpose chatbots are not medical devices, not cleared for clinical decisions, and not trained on your local policies or patient records. Yet they are influencing choices at the point of care.
Without controls, these tools can produce convincing nonsense, leak sensitive data, or introduce friction that slows teams down. With the right boundaries, they can still reduce grunt work and support education, without stepping into diagnosis or treatment calls.
Appropriate vs. inappropriate use
- Appropriate: Drafting patient education materials for clinician review; summarizing non-PHI policies; generating checklists or SOP outlines; brainstorming questions for vendor demos; converting approved content into different formats (e.g., plain language) with human sign-off.
- Inappropriate: Diagnosis, triage, or treatment recommendations; device setup or dosing guidance without validated guardrails; interpreting patient-specific data; answering messages with PHI in public models; replacing clinical or professional judgment.
Practical risks to watch
- Misinformation at scale: Fluent but incorrect answers that slip past busy clinicians.
- Privacy leaks: PHI entered into public models may be logged or exposed. Limit what goes into prompts and use approved enterprise deployments.
- Overconfidence: High-quality tone creates false certainty. Require citations and approvals for clinical-facing content.
- Workflow disruption: Copy-paste into EHRs, mismatched formats, and unclear ownership can add rework.
- Bias and fairness: Output may vary by language, demographics, or rare conditions. Monitor and audit regularly.
Safeguards that work
- Define boundaries: A written policy on allowed vs. disallowed use cases, with examples and "red lines."
- Human-in-the-loop: Require expert review for any content that reaches patients or clinicians.
- Data protection: Block PHI in public models; prefer enterprise solutions with logging, access controls, and retention limits.
- Validated knowledge sources: Use retrieval from vetted, versioned content; track citations and last review dates.
- Audit and monitoring: Log prompts/outputs, sample for errors, measure incident rates, and feed back fixes (a minimal screening-and-logging sketch follows this list).
- Training and onboarding: Teach limits, prompt hygiene, and escalation paths; certify users before access.
- Incident response: Set up a lightweight safety review board and a fast path to pause risky use.
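The data-protection and audit items above can be partially automated at the point where prompts leave the organization. Below is a minimal, illustrative Python sketch of a wrapper that screens prompts for obvious PHI-like patterns and logs every exchange for later sampling. The regular expressions, log path, and the send_to_approved_model placeholder are assumptions for illustration, not a validated PHI detector or a real enterprise gateway.

```python
import json
import re
import time
from pathlib import Path

# Illustrative patterns only: real PHI detection needs a vetted service,
# not a handful of regular expressions.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-style identifiers
    re.compile(r"\bMRN[:\s]*\d{6,}\b", re.I),    # medical record numbers
    re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),  # dates that may be birth dates
]

# Assumed local path; in practice, write to central, access-controlled
# storage with retention limits.
AUDIT_LOG = Path("llm_audit_log.jsonl")


def send_to_approved_model(prompt: str) -> str:
    """Placeholder for a call to your organization's approved enterprise LLM."""
    raise NotImplementedError("Wire this to your approved enterprise deployment.")


def screen_prompt(prompt: str) -> list:
    """Return the PHI-like patterns found in the prompt (empty list if none)."""
    return [p.pattern for p in PHI_PATTERNS if p.search(prompt)]


def log_event(record: dict) -> None:
    """Append one JSON line per event so samples can be pulled for review."""
    record["timestamp"] = time.time()
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")


def ask_model(prompt: str, user_id: str) -> str:
    """Screen a prompt, send it only if clean, and log the exchange."""
    hits = screen_prompt(prompt)
    if hits:
        # Block the request and record the attempt for the safety review board.
        log_event({"user": user_id, "blocked": True, "patterns": hits})
        raise ValueError("Prompt blocked: possible PHI detected; use the approved workflow.")

    response = send_to_approved_model(prompt)
    log_event({"user": user_id, "blocked": False, "prompt": prompt, "response": response})
    return response
```

Even a small wrapper like this makes the governance points concrete: requests are checked before they leave the organization, every exchange lands in an auditable log, and blocked attempts become data the safety review board can act on.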
Regulation and ethics: what to expect
General-purpose chatbots are typically not cleared as medical devices; using them for clinical decisions risks stepping into regulated territory. Policies should require approved, fit-for-purpose tools when care is involved.
For context, see the FDA's work on AI/ML-enabled medical devices and WHO guidance on AI ethics in health. Use these as anchors when writing your governance policy.
A simple adoption plan for healthcare teams
- 1) Identify low-risk wins: Content drafting, policy summaries, and admin tasks with no PHI.
- 2) Classify risk: Tag use cases by impact on patient safety, privacy, and workflow.
- 3) Choose the right tool: Prefer enterprise models with guardrails; avoid public endpoints for any sensitive work.
- 4) Build prompts and templates: Standardize instructions, require citations, and embed disclaimers (see the template sketch after this list).
- 5) Validate: Compare outputs against gold standards; document failure modes and limits.
- 6) Train users: Short courses, scenario drills, and a quick-reference policy. Consider role-based learning paths.
- 7) Monitor and iterate: Track errors, collect feedback, and refine approved use cases over time.
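Step 4 can be made concrete with a standard prompt template. The sketch below is illustrative only, assuming a hypothetical internal policy document and wording of our own; it shows one way to standardize instructions, require citations to vetted sources, and attach a review disclaimer to every draft.

```python
# Illustrative template for step 4: standardized instructions, required
# citations, and an embedded disclaimer. Source titles and wording are
# placeholders, not approved organizational text.

PROMPT_TEMPLATE = """You are drafting internal, non-clinical content for review by qualified staff.

Task: {task}

Rules:
- Use ONLY the approved sources listed below; cite the source title after each claim.
- If the sources do not cover the question, say so instead of guessing.
- Do not provide diagnosis, triage, dosing, or treatment recommendations.

Approved sources (version, last review date):
{sources}
"""

DISCLAIMER = ("DRAFT for human review. Generated with an AI tool; not for "
              "clinical use until approved by a qualified reviewer.")


def build_prompt(task, sources):
    """Fill the template with the task and the vetted, versioned sources."""
    source_lines = "\n".join(
        f"- {title} (v{version}, reviewed {reviewed})" for title, version, reviewed in sources
    )
    return PROMPT_TEMPLATE.format(task=task, sources=source_lines)


def finalize_output(model_output):
    """Attach the disclaimer so every draft carries it into the review workflow."""
    return f"{DISCLAIMER}\n\n{model_output}"


# Example usage with a hypothetical internal document:
prompt = build_prompt(
    task="Summarize the visitor badge policy in plain language.",
    sources=[("Facility Access Policy", "3.2", "2025-11-01")],
)
```

Pairing the template with the retrieval and review practices above keeps generated drafts inside the approved-content loop rather than letting them reach patients or clinicians directly.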
Need structured upskilling for different roles? See role-based options here: AI courses by job.
Event details and speakers
This webcast unpacks real cases, technical weak spots, and practical safeguards so your teams can use AI chatbots responsibly, without putting patients or your organization at risk.
Register Now
Moderator
Rob Schluth
Principal Project Officer I, Device Safety, ECRI
Rob focuses on content development and program management for ECRI's Device Safety group. Across 30 years, he has contributed to hundreds of device evaluations, problem reports, and guidance articles spanning a wide range of technologies. He manages special initiatives for the device evaluation and safety team and leads development of the annual Top 10 Health Technology Hazards report.
Panelists
Marcus Schabacker, MD, PhD
President and Chief Executive Officer, ECRI
A board-certified anesthesiologist and intensive care specialist, Dr. Schabacker has 35 years in healthcare and 20 years in senior leadership across medical device and pharmaceutical organizations, covering medical affairs, clinical development, regulatory, quality, R&D, and patient safety. Trained at the Medical University of Lübeck, Germany, he served as senior medical officer at Mafikeng General Hospital in South Africa in a humanitarian program supporting the African National Congress government under Nelson Mandela. He is an affiliate assistant professor at The Stritch School of Medicine at Loyola University Chicago.
Francisco Rodriguez-Campos, PhD
Principal Project Officer, Device Evaluation, ECRI
Francisco evaluates medical imaging technologies such as CT and breast tomosynthesis. Previously a neuroscientist and instructor at the University of Pennsylvania, he performed image-guided surgeries (CT and MRI) for chronic implants in Old World macaques and taught medical device courses in biomedical engineering. He has led technology assessment projects in El Salvador, consulted for PAHO/WHO on medical technology initiatives, and directed a clinical engineering graduate program.
Christie Bergerson, PhD
Device Safety Analyst, ECRI
Dr. Bergerson consults on medical device topics, including AI-enabled devices. Her background spans in vitro diagnostics, orthopedics, and software development, with AI connecting these domains. She has published widely and guest lectures at institutions including Johns Hopkins University and Texas A&M University.