Insurers Turn to AI for Coverage Decisions as Risks to Patients Mount
Major health insurers are deploying artificial intelligence to make coverage decisions, with executives telling Wall Street analysts the move will cut costs. The Trump administration is testing AI to manage prior authorizations in Medicare and seeking to override state regulations on the technology.
But the strategy carries significant risks. Class action lawsuits have accused insurers of using AI to wrongfully deny treatment. New research from Stanford University warns that training AI on a system already plagued by improper denials could amplify those problems.
The Data Problem
Michelle Mello, a co-author of the Stanford study, described the core issue: "There is a world in which using AI could make that worse, or at least replicate a bad human system, because the data that it would be training on is from that bad human system."
The research team also identified potential benefits alongside the risks. But the warning is clear: feeding AI systems data from a flawed approval process risks automating those same flaws at scale.
What This Means for Insurance Professionals
Insurance workers managing claims and coverage decisions face pressure to adopt AI tools while regulators in multiple states push back. Red and blue states alike have moved to limit AI in insurance, while federal officials seek to preempt those restrictions.
The tension between cost savings and patient harm is shaping how coverage decisions will be made. Understanding both the capabilities and limitations of these systems is becoming essential for anyone working in health insurance.