AI chatbots give problematic cancer advice nearly half the time, study finds

Major AI chatbots gave "problematic" responses to nearly half of cancer-related questions in a new BMJ Open study, sometimes listing unproven treatments after warning against them. About a third of U.S. adults use AI for health information.

Categorized in: AI News, Science and Research
Published on: Apr 21, 2026

AI Chatbots Steer Users Toward Unproven Cancer Treatments, Study Finds

Popular artificial intelligence chatbots including ChatGPT, Google's Gemini, and Meta AI frequently provide responses that could mislead patients away from conventional medicine, according to research published Tuesday in BMJ Open.

Researchers at the Lundquist Institute for Biomedical Innovation tested five major chatbots by asking them questions designed to elicit misinformation, such as whether 5G causes cancer or whether alternative therapies outperform chemotherapy. Nearly half of all responses were "problematic," with 19.6% rated as "highly problematic" and 30% as "somewhat problematic."
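The headline figure simply combines the two "problematic" rating categories. A minimal sketch of that tally, using invented counts chosen only to match the reported percentages (the study's actual sample size and raw counts are not given here):

```python
from collections import Counter

# Hypothetical rating counts, invented to illustrate how 19.6% "highly
# problematic" and 30% "somewhat problematic" combine to "nearly half."
# These are NOT the study's real data.
ratings = Counter({
    "highly_problematic": 98,     # 19.6% of an assumed 500 responses
    "somewhat_problematic": 150,  # 30.0% of 500
    "acceptable": 252,            # remaining 50.4%
})

total = sum(ratings.values())
problematic = ratings["highly_problematic"] + ratings["somewhat_problematic"]

print(f"{problematic}/{total} responses problematic ({problematic / total:.1%})")
# prints "248/500 responses problematic (49.6%)"
```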

What the bots got wrong

When asked which alternative therapies work better than chemotherapy, the chatbots initially warned that alternatives lack scientific backing. Then they listed treatments anyway: acupuncture, herbal medicine, and Gerson therapy, which actively discourages chemotherapy use.

Some bots identified clinics offering these unproven treatments. This approach, presenting scientific and unscientific information with equal weight, creates what researchers call "false balance."

Nick Tiller, the study's lead author, said the bots' inability to give clear, evidence-based answers may convince users that alternatives to conventional treatment exist. "The chatbot's inability to give a very science-based, black-and-white answer, and giving this both-sides approach, might lead someone to think there are other ways to treat cancer," he said.

Grok, Elon Musk's AI application, performed worst among the five tested. The bots were most accurate on vaccine questions but still delivered potentially harmful responses on roughly 27% of cancer-related queries.

Real patient consequences

About one-third of American adults now use AI for health information, according to a recent KFF poll. Clinicians report direct harm from this reliance.

Dr. Michael Foote at Memorial Sloan Kettering Cancer Center said alternative medicines not evaluated by the FDA can damage the liver and metabolism. "Some of these medicines aren't evaluated by the FDA, can hurt your liver, hurt your metabolism and some of them hurt you by patients relying on them and not doing conventional treatments," he said.

Foote has encountered patients who received false prognoses from chatbots. "I've encountered where patients come in crying, really upset because the AI chatbot told them they have six to 12 months to live, which, of course, is totally ridiculous," he said.

The oversight gap

Dr. Ashwin Ramaswamy, an instructor of urology at Mount Sinai Hospital, said safety measures are falling short. "The technology that's needed, the methodology that's needed for the FDA, for people, for doctors, to understand how it works and to have trust in the system is not there yet," he said.

The findings extend a pattern: while AI systems can pass medical licensing exams, they frequently fail in clinical and emergency scenarios where context and precision matter most.

For professionals evaluating AI tools in healthcare settings, understanding these failure modes is essential. AI for Healthcare courses and AI Research Courses cover methodology for testing and validating AI systems in medical contexts.

