UCLA physician calls for federal safeguards before wider adoption of AI in healthcare

A UCLA emergency physician says AI could ease doctor shortages and cut costs, but only after federal safety standards are set. Without clear rules on testing and liability, patients risk acting on confident-sounding but wrong answers.

Published on: Apr 28, 2026


A UCLA emergency medicine physician argued that artificial intelligence could help address doctor shortages and high healthcare costs, but only if federal standards and accountability measures are in place first.

Dr. Hashem Zikry wrote in the Los Angeles Times that many Americans are already using AI tools for medical guidance. He said the technology could handle routine tasks like prescription refills and common diagnoses, reducing pressure on an overburdened healthcare system.

But Zikry emphasized a critical condition: clinical AI must be proven safe and effective before wider adoption. Without clear federal standards, he argued, patients and providers lack the assurance that these tools will deliver reliable results.

The Current State of AI in Medicine

Patients facing long wait times and limited access to care have begun turning to AI on their own. This trend reflects real gaps in the healthcare system, not a sign that the technology is ready for clinical use.

The gap between patient demand and clinical readiness creates risk. Zikry's position aligns with broader concerns in medicine: AI tools can produce confident-sounding answers that may be incorrect, potentially leading patients astray.

What Needs to Happen Next

Federal oversight is essential. Standards should define how AI systems are tested, what accuracy thresholds they must meet, and how providers should integrate them into patient care.

Accountability measures must specify who is responsible when something goes wrong: the developer, the healthcare provider, or both. Without clarity, liability questions could slow adoption even for tools that work well.

Zikry's argument reflects a practical reality for healthcare professionals: AI's potential to expand access is real, but rushing deployment without safeguards could create new problems faster than it solves existing ones.

For professionals working in healthcare, understanding both the promise and the preconditions matters. AI for Healthcare resources can help clinicians stay informed as these tools develop and regulations take shape.

