Health insurers use AI to deny care with little human oversight, doctors warn

Health insurers are using AI to process prior authorization decisions at speeds that make meaningful physician review unlikely. Sixty-one percent of doctors surveyed by the AMA fear that unregulated payer AI has increased, or will increase, denials.

Published on: May 13, 2026

AI Is Becoming Healthcare's Gatekeeper. That's a Problem.

One-third of U.S. adults have used AI to find health information. More than 40% of those who use health AI say they've uploaded personal medical information into an AI tool. In hospitals and doctors' offices, the technology now writes clinical notes, triages messages, predicts readmissions, and informs prescription decisions that physicians once made alone.

The tools work. Ambient clinical intelligence has reduced documentation burden and, in many cases, improved clinician experience. But healthcare organizations face a harder question: how should AI actually be used?

The answer matters because insurers are deploying AI differently than hospitals are. Health insurers increasingly use algorithmic tools to evaluate claims, guide prior authorization decisions, and predict what care patients "should" need. The stated goal is efficiency and cost control. The effect, according to physicians, is denial.

Physicians Fear AI-Driven Denials

Sixty-one percent of physicians surveyed by the American Medical Association said they fear that payers' unregulated AI use has increased or will increase prior authorization denials. Some denials are processed at speeds that make meaningful physician review unlikely, according to reported cases.

The concern is not theoretical. Michelle Mello, a health law scholar at Stanford, wrote that "wrongful denials may be occurring as a result of a lack of meaningful human review of recommendations made by AI."

This matters because individual care decisions are not purely statistical exercises. AI systems are trained on large populations, optimized for pattern recognition, and designed to generate recommendations based on statistical likelihood. When insurers treat individual patients as data points rather than as people, the system breaks down.

Patients are more than the sum of a few numbers. They do not always map cleanly to a model.

The Difference Between Tool and Gatekeeper

The core issue is not whether AI belongs in healthcare. It is how it is used.

A tool supports human judgment. A gatekeeper replaces it. Insurers are quietly shifting toward the latter.

The Centers for Medicare & Medicaid Services has stated that algorithms may assist in coverage determinations but cannot override individual patient circumstances or substitute for clinical judgment. The AMA has called for greater oversight, emphasizing transparency, bias mitigation, and mandatory human review in decisions that affect patient care.

Patients are pushing back too, filing lawsuits that allege AI-driven denials have inappropriately blocked care.

Liability Remains Unclear

These disputes have exposed a gap in the legal system. If a clinician follows an AI recommendation that leads to a poor outcome, whose error is it? If an insurer uses AI to deny care and that denial causes harm, who is accountable?

Traditional malpractice law assumes a human decision-maker. AI undermines that assumption. The healthcare system has no clear framework for liability when algorithms contribute to bad outcomes.

As one analysis from Harvard's Petrie-Flom Center noted, the legal framework for AI has not caught up to the technology because the technology itself is often opaque, difficult to interrogate, and constantly evolving.

What Regulation Should Look Like

In clinical settings, the path forward is clear: AI should augment, not replace, clinician judgment. Outputs should be reviewable, explainable, and contestable. The clinician should remain the final decision-maker.

In the payer environment, standards need to be just as strong, if not stronger. Coverage decisions cannot be reduced to algorithmic outputs without meaningful human oversight. Models must be transparent, validated against clinical standards, and monitored for bias. There must be clear accountability when decisions affect access to care.

Without these boundaries, the system risks building a structure where decisions that shape access to care are made by tools that no one fully understands and no one fully owns.

AI is not the future of healthcare. It is the present. The question now is whether it serves patients or whether patients are forced to serve the system.

The window to decide is still open. But it is closing.

