Study Finds AI Chatbots Often Provide Less Accurate Medical Advice Than Other Sources

A study shows AI chatbots often provide incomplete or less accurate medical advice than traditional sources. Healthcare professionals should address patient misconceptions and promote verified information.

Categorized in: AI News, Healthcare
Published on: May 10, 2025

Evaluating Medical Advice from AI Chatbots: What Healthcare Professionals Need to Know

AI chatbots like ChatGPT have become a common first stop for people seeking quick medical advice. While convenient, a recent study involving over 1,200 participants reveals important limitations in the accuracy of medical information provided by these tools. This insight is critical for healthcare professionals who may encounter patients relying on AI-generated advice.

Study Overview

A team of researchers from the U.K. and the U.S. conducted a controlled study to assess how well large language models (LLMs) deliver medical guidance compared to traditional sources. Volunteers were randomly assigned to use AI chatbots (including Command R+, Llama 3, and GPT-4o) or their usual resources, such as online searches and personal knowledge, when working through medical scenarios.

All chatbot interactions were recorded and later analyzed for accuracy and relevance of the advice given.

Key Findings

  • Incomplete Queries Hinder Chatbot Effectiveness: Many volunteers failed to provide sufficient detail in their questions, limiting the chatbots' ability to fully understand the medical issue.
  • Mixed Accuracy Compared to Other Sources: Chatbot advice was sometimes comparable to information found on medical websites or common-sense reasoning but was often less accurate.
  • Reduced Diagnostic Accuracy: Participants using chatbots were more likely to misidentify their ailments and underestimate symptom severity.
  • No Clear Advantage for AI Chatbots: The study did not find instances where chatbots consistently outperformed traditional information sources.

Implications for Healthcare Workers

The findings highlight the risks of relying on AI chatbots for medical guidance without professional consultation. Patients may receive incomplete or misleading advice, which can delay proper diagnosis and treatment. Healthcare providers should be prepared to address misconceptions stemming from chatbot use and encourage patients to seek verified information.

Clear communication and patient education remain essential to ensure that AI tools complement rather than compromise healthcare outcomes.

Further Reading

For healthcare professionals interested in AI applications and training, explore AI courses tailored by job role that cover practical aspects of AI in healthcare settings.

The full study is available on the arXiv preprint server under the title "Clinical knowledge in LLMs does not translate to human interactions."