Generative AI and Medical Advice: Use With Caution
Generative AI apps like ChatGPT have seen tens of millions of downloads, with many users turning to them for medical advice. However, a recent study by the University of Waterloo revealed that only a small portion of ChatGPT's answers to open-ended medical questions were accurate or clear. Experts urge healthcare professionals to approach large language models (LLMs) carefully.
Dr. Vera Kohut, national medical director at Serefin Health, emphasized that AI is here to stay but needs to improve its accuracy to be truly reliable. She explained that LLMs mimic human conversation but lack personal context, which is critical in healthcare.
Why AI Falls Short in Medical Guidance
LLMs generate responses based on patterns learned from human-generated data. They do not have access to your personal medical history, genetics, family background, or even environmental factors like geography. This means the advice they provide is generalized and may miss crucial nuances in individual cases.
Because AI tools cannot assess individual risk or account for personal health details, their suggestions may be inaccurate or incomplete. This limitation makes them unsuitable as a replacement for professional medical consultation.
Red Flags to Watch For
- Single or vague diagnoses: Be cautious if an AI chatbot offers just one diagnosis without explaining alternatives or reasoning.
- Lack of urgency: If worsening symptoms don't prompt the AI to recommend immediate medical attention, question the reliability of its advice.
- Unusual treatments: Watch out for suggestions involving supplements or therapies that seem out of place or unsupported by evidence.
How AI Can Still Be Useful
Despite limitations, AI tools have practical uses in healthcare settings. They can help patients track symptoms, organize medical appointments, and manage medications. AI can also make complex medical information easier to understand for patients.
However, privacy remains a key concern. Sharing personal health data with AI platforms may not be protected under confidentiality laws. OpenAI CEO Sam Altman has warned that data entered into ChatGPT is not legally confidential. Healthcare workers should remind patients to be cautious with sensitive information.
For healthcare professionals interested in learning more about AI applications and safe usage, resources such as Complete AI Training offer courses tailored to different skill levels and job roles.