Experts warn AI chatbots in healthcare risk patient safety as adoption grows

Physicians' use of AI jumped from 38% to 81% in a single year, but the technology can generate false medical information with no legal accountability when it causes harm. Experts say AI must support doctors, not replace them.

Published on: Apr 16, 2026

AI Accuracy Concerns Mount as Healthcare Adoption Accelerates

Physicians are turning to artificial intelligence at unprecedented rates, but experts warn the technology poses serious risks to patient safety without clear accountability structures. An American Medical Association survey found 81% of physicians used AI in March 2023, up from 38% just a year earlier.

The core problem: AI chatbots and large language models can generate plausible-sounding but entirely false medical information. Christabel Randolph, associate director at the Center for AI and Digital Policy, said these systems "can be confidently wrong" because they generate responses based on probability, not medical knowledge or patient history.

Unlike doctors, AI systems face no legal or professional consequences for bad advice. When a physician gives incorrect guidance, patients can pursue malpractice claims and regulators can revoke licenses. "When an AI gives you bad advice, who's responsible?" Randolph said.

The Risk to Patients

Patients relying on AI for medical advice risk delaying necessary care or taking wrong medications based on inaccurate information. Some policy experts worry doctors will increasingly outsource clinical decisions to algorithms, further distancing physicians from direct patient care.

The expansion of AI may also be inflating healthcare costs. Research suggests AI-driven billing practices can generate diagnoses without corresponding treatment, raising expenses rather than reducing them.

Where AI Shows Promise

AI does offer legitimate benefits in healthcare operations. The technology can streamline administrative work and paperwork, freeing physicians to spend more time with patients. Deep learning tools can help doctors diagnose rare conditions by analyzing large databases of cases and research that would take humans months to review.

Some economists argue AI could lower overall healthcare costs and expand patient access. Jeffrey Singer, senior fellow at the Cato Institute, said the technology could give patients more control over their care rather than forcing them to navigate restrictive appointment systems and licensing barriers.

What Regulators Are Doing

The Centers for Medicare and Medicaid Services announced a digital health ecosystem initiative in July 2025 to coordinate AI deployment across healthcare. President Trump unveiled a national legislative framework for advancing AI in March 2026.

The Biden administration had earlier planned to release federal guidelines on responsible AI development and deployment in healthcare by the end of 2026.

The Bottom Line

Healthcare organizations adopting AI must build accountability mechanisms and human oversight into any system that touches patient care. The technology works best when it augments clinical judgment rather than replacing it. Without clear responsibility structures and accuracy standards, AI poses real risks to patient safety.
