AI advice is convincing - and sometimes dangerously wrong
As AI tools slide into daily life, a Limerick pharmacist is urging residents to treat AI-generated medical advice with extreme caution. The message is simple: these tools can sound smart while being completely off-base.
People have always Googled symptoms. Now they're leaning on chatbots that don't just list sources - they paraphrase them. As the pharmacist put it, "It's basically taking all of those websites and, instead of presenting you with a list of websites, it's going through all of the relevant info in them and digesting it and giving it back to you in a way that's really easy for us to consume."
The core risk: confident nonsense
Misleading information is the biggest problem - especially "AI hallucinations," where a bot delivers false guidance with absolute confidence. "A lot of people, when they've asked AI questions about taking minerals, as in mineral supplements, it was saying to eat rocks," she said.
Another case: a patient asked an AI what to do for a kidney stone and was told to drink urine. That's not just wrong - it's unsafe. The system likely mashed together multiple pieces of advice (drink lots of fluids, monitor urine) and produced a sentence that sounded coherent but wasn't correct.
Why this matters in clinical settings
AI doesn't know which questions to ask, what to clarify, or what matters in a specific case. It can't capture nuance from a patient's history, medications, or red flags that change the plan.
That gap - missing context - is where harm happens. Patients show up with confidence in an answer that never went through a clinical filter.
Data privacy: don't paste PHI into public bots
Healthcare data is sensitive. If a patient or clinician pastes personal medical information into an open chatbot, that data leaves controlled systems and may be stored, or even reused, by the provider.
If your organization uses AI, keep it inside secure, approved tools. For policy guidance, see the WHO's report on the ethics and governance of artificial intelligence for health.
What healthcare professionals can do now
- Ask upfront: "Did you use AI or online tools for this?" Document the answer - it helps explain a patient's beliefs and opens the door to education.
- Explain hallucinations in plain language: "These tools sometimes make things up and say them confidently." Keep the tone nonjudgmental.
- Clarify safe next steps: dosing, triage, and diagnosis should come from clinicians - not chatbots.
- Offer trusted resources: patient-facing leaflets, guideline-backed sites, and your practice's advice line.
- Protect privacy: never paste PHI into public chatbots. Use organization-approved, audited systems only (a minimal pre-screen sketch follows this list).
- Set policy for AI use: define approved tools, human-in-the-loop review, and documentation standards.
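For organizations that want a technical backstop to go with that policy, one option is a lightweight pre-screen that sits between staff and any external chatbot and refuses to forward text that looks like it contains identifiers. The Python sketch below is illustrative only: the pattern list, the PHI_PATTERNS names, and the blocking behaviour are assumptions for demonstration, not a description of any real tool or of the pharmacist's practice.

```python
# Illustrative pre-screen for text headed to an external chatbot.
# Patterns and behaviour are assumptions for demonstration, not a real product.
import re

# Rough patterns for identifiers that commonly appear in pasted clinical text.
PHI_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,3}[ -]?)?(?:\(?\d{2,4}\)?[ -]?)?\d{3}[ -]?\d{4}\b"),
    "date": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
    "record_number": re.compile(r"\b(?:MRN|NHS|PPSN)[ :#-]*\d{4,}\b", re.IGNORECASE),
}

def phi_findings(text: str) -> dict:
    """Return likely identifiers found in the text, grouped by pattern name."""
    hits = {}
    for name, pattern in PHI_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[name] = matches
    return hits

def safe_to_send(text: str) -> bool:
    """Refuse to forward text that looks like it contains identifiers."""
    findings = phi_findings(text)
    if findings:
        print("Blocked - possible identifiers found:", ", ".join(findings))
        return False
    return True

if __name__ == "__main__":
    prompt = "Patient J. Murphy, DOB 04/11/1978, MRN 556677 - what does this result mean?"
    if safe_to_send(prompt):
        print("OK to forward to the organisation's approved AI tool.")
```

A pattern check like this is deliberately coarse - it will miss plenty and flag some false positives - so it complements, rather than replaces, approved tools and staff training.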
Where AI can help - under supervision
There's real upside when AI is kept in professional hands. In the near term, it can assist with scan interpretation and help summarize clinical trials from reputable sources - with a clinician verifying the output.
If your team is building AI literacy and safe-use practices, structured training can speed that up.
Bottom line
Use technology for information, not diagnosis. Confirm AI-generated advice with a healthcare professional - always.