Medical Charlatans Have Existed Through History. But AI Has Turbocharged Them
Nearly a year into parenting, I’ve leaned on advice and tricks to keep my baby alive and engaged. As my child grows curious and agile, the uncertainties of early childhood remain. When he started nursery in Berlin, other parents warned me about the inevitable wave of illnesses. Naturally, I turned to the internet for guidance—this time, to ChatGPT, despite my initial hesitation.
I asked a simple question: “How do I keep my baby healthy?” The AI’s response was practical: avoid added sugar, watch for fever, and talk to your baby often. Yet it ended by asking for my baby’s age so it could give more specific advice. That request raised my suspicion. While staying informed about my child’s health is crucial, I chose to log off, wary of the AI’s limitations.
When AI Misinforms Health Policy
A similar scenario unfolded earlier this year in the US. Amid a growing measles outbreak, children’s health became a political battleground. The Department of Health and Human Services, led by Robert F. Kennedy Jr., launched the Make America Healthy Again (Maha) commission to tackle childhood chronic diseases. The report focused on pesticides, prescription drugs, and vaccines as key threats.
What raised alarms was the report’s citation errors and unverified claims. Independent researchers and journalists suspected that ChatGPT had been used in its compilation. Some cited studies didn’t exist: epidemiologist Katherine Keyes, listed as an author on one such study, confirmed the paper was fake.
AI and the Return of Medical Charlatanry
Charlatans have long exploited health fears. In the 17th and 18th centuries, untrained individuals sold questionable remedies, sometimes even obtaining licenses to do so. They used public spaces to promote products like balsamo simpatico, falsely claiming to treat venereal diseases.
Today’s AI-powered misinformation fits this old pattern but with far greater reach. Falsehoods can appear on trusted platforms or mimic scientific research, eroding trust. Kennedy’s rejection of established medical journals like The Lancet and The New England Journal of Medicine deepens the problem, especially given his influence over public health debates and funding.
Unlike science, which seeks truth, AI doesn’t distinguish fact from fiction. Its convenience tempts users to rely on it for medical advice, but this can lead to dangerous misinformation. When governments depend heavily on AI-driven reports, misleading conclusions about public health become a real risk.
The Need for AI Governance in Healthcare
Technology journalist Karen Hao has posed the urgent question: how do we govern artificial intelligence so that it improves society rather than harming it? The answer lies in establishing clear policies that hold tech companies and governments accountable for AI misuse.
Individual caution helps, but broad regulatory frameworks are essential. Without them, we risk normalizing a new form of charlatanry powered by AI, where truth becomes a casualty.