Researchers invent fake disease to show how AI spreads false medical information

Researchers invented a fake disease called bixonimania and watched major AI chatbots repeat it as fact within weeks. The experiment shows how easily false medical information spreads when it mimics academic writing.

Categorized in: AI News, Science and Research
Published on: Apr 12, 2026

Fake Disease Spreads Through AI Chatbots in Research Experiment

Researchers at the University of Gothenburg created a fictional illness called bixonimania and watched major AI chatbots repeat it as fact within weeks. The experiment reveals how easily language models absorb and amplify false medical information found online.

Medical researcher Almira Osmanovic Thunström wrote two deliberately fake academic papers about the condition in early 2024. The papers contained obvious fictional markers: a made-up author, a nonexistent university, and references to Starfleet Academy.

Bixonimania does not exist in real medicine. The fake studies described it as a skin condition linked to blue light from digital screens, causing dark or pinkish discoloration around the eyes.

How AI Systems Repeated the False Information

Several major chatbots began describing bixonimania as a real illness and offered health advice to users. Some systems told people it was a rare condition caused by screen exposure and recommended visiting eye specialists.

The fake papers looked professional. Because they were formatted like legitimate medical research, AI models treated them as more trustworthy. Mahmud Omar at Harvard Medical School said AI systems are more likely to expand on false information when it resembles formal academic writing.

The experiment had an unexpected consequence: other researchers cited the invented disease in real academic work. One study published in the journal Cureus referenced bixonimania as genuine research. The article was later retracted after editors discovered it cited a fictional illness.

Why This Matters for Your Work

As AI becomes more common in health advice and research, false information could spread quickly if systems absorb unreliable material. The problem extends beyond a single experiment or a single fake disease.

An OpenAI spokesperson said current models provide safer and more accurate health information than earlier versions. Google said older models produced the problematic responses and that its AI tools encourage users to verify sensitive information with professionals.

Experts say AI systems produce different answers depending on how questions are asked and what information they retrieve from the internet. Understanding these limitations is critical for anyone working with or relying on AI-generated content.


