Google’s Healthcare AI Invents a Nonexistent Brain Part — What If Doctors Don’t Catch It?
Imagine a radiologist examining your brain scan and spotting an abnormality in the basal ganglia, a brain region critical to motor control and emotional processing. The name closely resembles the basilar artery, which supplies blood to the brainstem, but the two are distinct structures requiring different treatments.
Now picture an AI model reading your scan and reporting an issue in the “basilar ganglia,” a body part that doesn’t exist. Google’s healthcare AI system, Med-Gemini, produced exactly this error, and it went unnoticed in official research and promotional materials. The company called it a mere typo, but many experts warn it signals serious risks in using AI for medical diagnosis.
Med-Gemini’s “Basilar Ganglia” Error
Med-Gemini is a suite of AI models designed to summarize health data, generate radiology reports, and analyze electronic health records. A 2024 research paper introducing Med-Gemini included a diagnosis of an “old left basilar ganglia infarct,” an anatomical impossibility. The error appeared in both the paper and Google’s accompanying blog post; after an expert raised concerns, the blog post was quietly edited, while the research paper itself remains unchanged.
Google downplayed the issue as a “simple misspelling” of “basal ganglia,” but medical professionals emphasize the danger of such errors. Even small misnomers can lead to critical misunderstandings in diagnosis and treatment.
Why This Error Matters
Using AI in clinical settings demands accuracy far beyond typical software applications. When an AI hallucinates or invents terms, clinicians relying on its output may miss the error, especially if the system sounds confident. This tendency to accept AI-generated information without sufficient skepticism is known as automation bias, and it is what makes confident-sounding mistakes so dangerous.
Maulin Shah, Chief Medical Information Officer at Providence, explains that even a two-letter difference is significant in medicine. He warns that errors can propagate when AI learns from incorrect inputs, compounding mistakes over time.
More Than Just a Typo: The Risk of AI Hallucinations
Following Med-Gemini, Google introduced MedGemma, another healthcare AI model with similar challenges. Researchers observed that slight changes in how questions were phrased led to varying, sometimes incorrect, diagnostic answers.
- In one case, MedGemma accurately diagnosed a rib X-ray issue with a detailed prompt but missed it entirely when asked a simpler question.
- In another, the model hallucinated multiple diagnoses when queried differently about an X-ray showing pneumoperitoneum.
These examples highlight how inconsistent, hallucination-prone outputs can dangerously affect clinical decisions; one simple way to probe that prompt sensitivity before trusting a model is sketched below.
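As a rough illustration, the sketch below asks the same clinical question in several phrasings and checks whether the answers agree. The `query_model` function is a hypothetical stand-in for whatever model interface is actually being evaluated (in practice the accompanying image would be passed along with the prompt); nothing here reflects Google’s implementation.

```python
# Minimal sketch, assuming a hypothetical query_model() stand-in for the model
# under evaluation: ask the same question in several phrasings and measure how
# often the answers agree before trusting any single output.
from collections import Counter

def query_model(prompt: str) -> str:
    """Placeholder for a real model call; the X-ray image would accompany the prompt in practice."""
    raise NotImplementedError("Connect this to the model you are evaluating.")

def probe_prompt_sensitivity(phrasings: list[str]) -> float:
    """Return the fraction of phrasings that produced the most common answer."""
    answers = [query_model(p).strip().lower() for p in phrasings]
    _, freq = Counter(answers).most_common(1)[0]
    agreement = freq / len(answers)
    if agreement < 1.0:
        print("Inconsistent answers; flag for human review:")
        for phrasing, answer in zip(phrasings, answers):
            print(f"  {phrasing!r} -> {answer!r}")
    return agreement

# Three phrasings of the same question about a single chest X-ray.
phrasings = [
    "Is there a rib fracture on this chest X-ray?",
    "Describe any abnormality visible on this chest X-ray.",
    "Anything wrong with this X-ray?",
]
# probe_prompt_sensitivity(phrasings)  # run once query_model is implemented
```

Even a check this crude makes the failure mode in the examples above visible: if different phrasings of the same question yield different findings, the output needs a human reviewer before it reaches a chart.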
Addressing AI’s Confabulation in Healthcare
Shah suggests the industry should focus on AI augmenting healthcare professionals, not replacing them. He advocates for real-time hallucination detection, where one AI monitors another to flag or block dubious outputs.
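Shah’s proposal is one model auditing another in real time. As a simpler stand-in for that idea, the sketch below shows where such a gate could sit: a checker scans a draft report for anatomy terms missing from a vetted vocabulary and holds the report for human review if anything unrecognized appears. The term list and function names are illustrative assumptions, and a production system would need a full clinical terminology and context-aware, likely model-based, checks rather than a hard-coded list.

```python
import re

# Minimal sketch of a "second set of eyes" gate: flag anatomy terms that are
# not in a vetted vocabulary and hold the report for human review.
# Both lists below are tiny illustrative subsets, not real clinical resources.
KNOWN_ANATOMY = {"basal ganglia", "basilar artery", "brainstem", "thalamus"}

# Phrases the checker looks for; a real system would use clinical NER instead.
CANDIDATE_PATTERN = re.compile(
    r"\b(?:basilar|basal)\s+(?:ganglia|artery)\b|\bbrainstem\b|\bthalamus\b",
    re.IGNORECASE,
)

def review_report(report: str) -> list[str]:
    """Return suspect terms found in the draft; an empty list means no flags."""
    found = {m.group(0).lower() for m in CANDIDATE_PATTERN.finditer(report)}
    return sorted(term for term in found if term not in KNOWN_ANATOMY)

draft = "Findings consistent with an old left basilar ganglia infarct."
flags = review_report(draft)
if flags:
    print("Hold for human review; unrecognized anatomy:", flags)
else:
    print("No unrecognized anatomy detected.")
```

The point is architectural rather than the specific check: when the checker and the generator disagree, the draft never reaches the clinician unreviewed.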
Shah compares AI hallucinations to “confabulation” in dementia, where a patient fabricates plausible but false memories. Similarly, AI can generate convincing but incorrect information, making errors hard to detect without careful review.
Dr. Judy Gichoya from Emory University echoes this concern, noting that AI models often don’t admit uncertainty, a critical flaw in medicine where accuracy is paramount. She warns that radiologists cannot work effectively if AI-generated reports contain frequent hallucinations.
Healthcare AI Is Not Yet Ready for Prime Time
Dr. Jonathan Chen from Stanford Medicine describes this moment as a “weird threshold” where AI is being integrated into clinical care too early. The “basilar ganglia” error might seem minor but reflects deeper problems requiring urgent attention.
The core issue is not only that AI systems sometimes err, but that they present false information with unwarranted confidence—potentially misleading clinicians and putting patients at risk.
What Healthcare Professionals Should Consider
- Maintain skepticism: Always critically evaluate AI outputs, regardless of how reliable they seem.
- Double-check AI-generated notes: Errors can propagate if unchecked, especially in electronic health records.
- Use AI as a tool for augmentation: AI should assist clinicians, not replace their judgment.
- Advocate for transparency: Demand clear disclosures about AI limitations and error rates from vendors.
As AI takes on more roles in medicine, clinicians must stay informed and cautious. For those looking to deepen their understanding of AI in healthcare and how to use it responsibly, exploring comprehensive courses can be valuable. Check out Complete AI Training’s healthcare-focused AI courses for practical knowledge on integrating AI safely into medical practice.
In the end, AI is a powerful aid but not infallible. Healthcare professionals must lead with vigilance and critical thinking to ensure patient safety remains paramount.