Health Insurers Bet on AI for Coverage Decisions, Despite Legal Risks
Health insurance executives told Wall Street this year that artificial intelligence could cut their costs by automating coverage decisions. The Trump administration is now testing AI in Medicare's prior authorization process while also pushing back against state-level AI regulations.
The shift carries real legal exposure. Class action lawsuits have accused insurers of using AI to wrongfully deny treatment. A Stanford University study warns that training AI on a system already plagued by wrongful denials could amplify the problem.
The Training Data Problem
Michelle Mello, a co-author of the Stanford research, explained the core issue: "There is a world in which using AI could make that worse, or at least replicate a bad human system, because the data that it would be training on is from that bad human system."
The study did identify potential benefits alongside the risks. But the finding underscores a fundamental tension: AI systems trained on flawed insurance practices may simply encode those flaws at scale.
Regulatory Pushback
Multiple states have moved to limit AI in insurance decisions. The Trump administration is simultaneously seeking to override those state rules, creating conflict between federal and state authority over how insurers deploy the technology.
For insurance professionals, the stakes are operational and legal. Coverage decisions made by AI systems now face scrutiny from regulators, courts, and state attorneys general. How your organization implements AI in claims and authorization processes, or chooses not to, will likely shape competitive positioning and legal risk for years.