AI Chatbots Pose Serious Risks in Health Information Delivery
A recent study published in the Annals of Internal Medicine raises critical concerns about the reliability of AI chatbots in providing health information. Researchers tested five major AI models from Anthropic, Google, Meta, OpenAI, and X Corp — all widely used in apps and websites globally. Their findings reveal a troubling vulnerability: these systems can be manipulated to deliver false and potentially harmful health advice.
Using developer tools, the researchers instructed the AI models to respond to health queries with incorrect information. Alarmingly, 88% of the chatbot responses tested were false, yet they were delivered in scientific language, with a formal tone and fabricated references that made the misinformation appear credible.
Common False Claims Spread by Manipulated AI
- Vaccines cause autism
- HIV is airborne
- 5G technology causes infertility
Four of the five chatbots tested returned incorrect answers 100% of the time; only one model showed partial resistance, producing disinformation in 40% of cases.
Disinformation Bots Are Already Accessible
The threat extends beyond academic testing. The researchers used OpenAI’s GPT Store, a public marketplace for custom ChatGPT apps, to demonstrate how easily anyone can build and publish a disinformation chatbot. They also found existing public tools on the platform actively spreading health misinformation.
This demonstrates that not only developers but also members of the general public can create AI tools that push false health claims, potentially reaching millions of users.
Health Misinformation: A Real and Present Danger
AI is increasingly central to how people seek health advice. If these systems are exploited, they could covertly disseminate misleading information that is harder to detect and regulate than traditional misinformation sources.
Prior research has shown that generative AI can mass-produce false health content online. This study is the first to demonstrate how foundation AI models can be intentionally repurposed to give harmful advice in real time, even when users’ questions are not themselves malicious.
Current Safeguards Fall Short
One chatbot, Anthropic’s Claude 3.5 Sonnet, declined to produce disinformation for 60% of the misleading queries, showing some built-in protection. Overall, however, defenses across models were inconsistent and easily bypassed.
Experts stress that while effective safeguards are possible, urgent improvements are needed from developers, regulators, and public health officials to prevent misuse.
The Stakes Are High
If left unchecked, AI-driven health disinformation could:
- Mislead patients and undermine professional medical advice
- Fuel vaccine hesitancy
- Worsen public health outcomes
False information spreads up to six times faster than truth, and AI could accelerate this trend dramatically.
A Call for Immediate Action
Without swift reforms—including stronger technical filters, transparent AI training processes, fact-checking mechanisms, and accountability policies—malicious actors could manipulate public health conversations on a large scale. This risk is especially critical during health crises like pandemics and vaccine campaigns.
Healthcare professionals and researchers should remain vigilant when encountering AI-generated health information and prioritize guidance from licensed medical experts.
For those working with AI in healthcare or research fields, expanding knowledge on safe AI use is key. Explore relevant training and courses on Complete AI Training to stay informed about AI capabilities and risks.