NYC Hospital CEO Proposes Replacing Radiologists With AI. Researchers Say That's Dangerous.
Mitchell Katz, president and CEO of New York's public hospital system, said this month that hospitals could replace radiologists with AI to cut costs. Speaking at a panel hosted by Crain's New York Business, Katz suggested that visual language models could handle X-ray diagnosis, particularly for breast cancer screening, if regulatory hurdles were cleared.
"We could replace a great deal of radiologists with AI at this moment, if we are ready to the regulatory challenge," Katz said.
The comments came weeks after the largest nurses' strike in New York City history, during which staffing and working conditions were central demands.
Radiologists Push Back
Mohammed Suhail, a radiologist at North Coast Imaging in San Diego, called the proposal evidence of dangerous incompetence. "Any attempt to implement AI-only reads would immediately result in patient harm and death, and only someone with zero understanding of radiology would say something so naive," Suhail said.
He added that hospital administrators appear willing to accept patient harm if cost cuts remain legal.
New Research Shows AI Fabricates Medical Findings
A forthcoming Stanford study provides concrete reason for concern. Researchers found that AI systems built on frontier models can pass medical benchmark tests without ever examining the actual X-ray images.
Instead of flagging missing images, the highest-scoring systems generated plausible-sounding explanations for findings they never saw. The researchers call this an AI "mirage," distinct from typical hallucinations in that the false reasoning appears coherent and image-based while being anchored to nothing real.
"In this epistemic mimicry, the model simulates the entire perceptual process that would have led to the answer," the Stanford scientists wrote. "This helps explain why reasoning traces, on their own, cannot certify visual reasoning: the trace may be fluent, coherent, and apparently image-based while being anchored to no image at all."
The finding reinforces earlier research showing that visual language models are functionally blind to medical images. Standard safeguards against hallucinations fail to catch these mirages because the AI's reasoning appears sound.
Implications for Patient Care
The research has direct implications for any hospital system that removes radiologists from the diagnostic pipeline. Patients receiving diagnoses based on AI mirages would have no way to know the system never actually analyzed their images.