Deadly Denials: Red and Blue States Move to Rein In Insurer AI as the White House Pushes Preemption
AI in health insurance has become a rare bipartisan flashpoint. Florida and Maryland are cracking down. California is threading the needle. And the White House wants to stop states from setting most of the rules.
For insurance leaders, this is no longer a theoretical debate. It's an operational, compliance, and reputational risk that touches prior authorization, claims adjudication, appeals, and vendor management.
What's actually changing in policy
A December executive order from the administration seeks to preempt most state AI rules, arguing that state-by-state regulation would stifle innovation. Legal scholars question whether a president can broadly preempt states without Congress, so expect challenges.
Meanwhile, states are moving. Arizona, Maryland, Nebraska, Texas, Illinois, and California have enacted laws that constrain or oversee insurer use of AI. Rhode Island is taking another run at a bill after a near miss, and lawmakers in North Carolina have shown interest in prohibiting AI-only coverage determinations.
Florida released an "AI Bill of Rights" with limits on using AI in claims and authority for the state to inspect algorithms. California required insurers to ensure algorithms are applied fairly and equitably, even as broader disclosure mandates were vetoed.
Why this matters to insurers
Voters across parties are wary of AI, and they're already unhappy with prior authorization. That skepticism is translating into hearings, lawsuits, and bills that target algorithmic denials and opaque decisioning.
Insurers emphasize that they use AI for speed and consistency, not to drive denials. Still, reporting on automated triage and proxy denials has fueled scrutiny, and physician groups are pressing for guardrails. The American Medical Association has publicly backed tighter state oversight of AI in prior authorization.
The compliance gap most bills leave open
Many state proposals require a "human in the loop" but don't define meaningful review. Without strong process design, human sign-off can become rubber-stamping.
There's also a coverage gap: ERISA preempts states from regulating self-insured employer plans, leaving federal policy as the only lever there. Expect a growing patchwork for fully insured lines and continued debate over federal preemption.
Practical steps to de-risk AI in coverage and claims now
- Inventory and scope: Maintain a live registry of all models, rules engines, and heuristic systems influencing utilization management, claims edits, fraud scoring, or provider payment integrity.
- Decision architecture: Prohibit AI-only denials. Allow auto-approvals where clinically safe, and require independent clinical review for any potential denial (see the routing sketch after this list).
- Human review standards: Define what "meaningful review" means (e.g., reviewer credentials, evidence checked, time-on-task, second-level review thresholds). Audit for compliance.
- Audit trails: Log inputs, features used, model version, reviewer ID, rationale text, and outcome. Retain for regulators and litigation readiness (a record layout is sketched after this list).
- Explainability and notices: Generate plain-language and clinical rationales that align with plan documents and medical policy. Include specific citations, not generic statements.
- Bias and fairness testing: Measure disparate impact across protected classes and proxies (see the ratio sketch after this list). Calibrate thresholds, features, and override rules to mitigate inequities.
- Prior authorization SLAs: Put guardrails on turnaround times, escalate high-risk deferrals, and monitor queue backlogs caused by AI triage.
- Appeals and peer review: Track overturn rates by model, reviewer, and condition (sketched after this list). Use findings to retrain models and update policies.
- Vendor governance: Require right-to-audit, model documentation, change logs, and performance/bias reports. Ban AI-only denials in vendor contracts.
- Model risk management: Adopt a formal framework (model inventory, validation, monitoring, change control) aligned to insurance regulators' expectations and internal audit.
- Regulatory monitoring: Map state requirements to controls. Segment workflows for fully insured vs. self-insured lines to avoid accidental spillover.
- Controls and kill switch: Establish thresholds for pausing a model (e.g., spike in complaint rate, denial error rate, or appeal overturns), with a tested rollback plan (sketched after this list).
- Communications playbook: Prepare templates for provider, member, and regulator inquiries that transparently describe how AI assists decisions without overpromising.
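A few of these controls compress well into code. First, the decision architecture. The sketch below is a minimal Python illustration, not a production implementation; the threshold value and type names are hypothetical. The one invariant it enforces is the policy above: the model may approve or refer, never deny.

```python
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    AUTO_APPROVE = "auto_approve"        # AI may approve where clinically safe
    CLINICAL_REVIEW = "clinical_review"  # any potential denial goes to a human

@dataclass
class ModelOutput:
    request_id: str
    approval_score: float  # model confidence that the request meets medical policy

# Hypothetical threshold; in practice, set per service line and validate clinically.
AUTO_APPROVE_THRESHOLD = 0.95

def route(output: ModelOutput) -> Disposition:
    """Route a prior-auth request. The model can never issue a denial:
    anything short of a confident approval is referred to clinical review."""
    if output.approval_score >= AUTO_APPROVE_THRESHOLD:
        return Disposition.AUTO_APPROVE
    return Disposition.CLINICAL_REVIEW
```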
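For audit trails, the core artifact is a structured, append-only record written at decision time. A minimal sketch with illustrative field names (yours should mirror your plan documents, medical policy, and retention schedule):

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class DecisionAuditRecord:
    request_id: str
    model_name: str
    model_version: str          # pin the exact version that scored the request
    features_used: dict         # inputs as the model saw them at decision time
    model_score: float
    disposition: str            # e.g., auto_approve / clinical_review
    reviewer_id: Optional[str]  # None only for auto-approvals
    rationale: str              # plain-language rationale tied to medical policy
    decided_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def write_audit(record: DecisionAuditRecord, log_path: str = "decisions.jsonl") -> None:
    """Append one immutable JSON line per decision for regulator and litigation readiness."""
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```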
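For bias testing, a common first screen is the disparate impact ratio: each group's approval rate divided by the most-favored group's rate, flagged when it falls below a chosen floor. The 0.8 default below is a widely used rule of thumb, not a legal safe harbor, and it should supplement deeper statistical testing, not replace it.

```python
from collections import defaultdict

def disparate_impact(decisions: list[tuple[str, bool]], floor: float = 0.8) -> dict[str, float]:
    """decisions: (group_label, approved) pairs.
    Returns each group's approval-rate ratio versus the most-favored group;
    ratios below `floor` warrant investigation, not automatic conclusions."""
    totals: defaultdict[str, int] = defaultdict(int)
    approvals: defaultdict[str, int] = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}
```

For example, `disparate_impact([("A", True), ("A", True), ("B", True), ("B", False)])` returns `{"A": 1.0, "B": 0.5}`, and group B's ratio would be flagged for review.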
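Overturn-rate tracking is a grouped rate over appeal outcomes. A sketch assuming each appeal is a dict carrying the grouping fields plus a boolean `overturned` flag (the field names are hypothetical):

```python
from collections import defaultdict

def overturn_rates(appeals, keys=("model_version", "reviewer_id")):
    """Return the overturn rate per (model_version, reviewer_id) combination.
    High-overturn cells are candidates for retraining and policy updates."""
    counts = defaultdict(lambda: [0, 0])  # key -> [overturned, total]
    for appeal in appeals:
        k = tuple(appeal[f] for f in keys)
        counts[k][0] += int(appeal["overturned"])
        counts[k][1] += 1
    return {k: overturned / total for k, (overturned, total) in counts.items()}
```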
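Finally, the kill switch reduces to monitored thresholds plus a rehearsed rollback path. The metric names and limits below are illustrative; real values belong in your model risk management framework, and `pause_model` and `rollback` stand in for hooks into your own serving stack.

```python
PAUSE_THRESHOLDS = {               # illustrative values; calibrate per line of business
    "complaint_rate": 0.02,        # complaints per decision
    "appeal_overturn_rate": 0.15,
    "denial_error_rate": 0.05,
}

def breached(metrics: dict[str, float]) -> list[str]:
    """Return the metrics that exceed their pause thresholds."""
    return [m for m, limit in PAUSE_THRESHOLDS.items() if metrics.get(m, 0.0) > limit]

def monitor(metrics: dict[str, float], pause_model, rollback) -> None:
    """pause_model and rollback are hypothetical hooks; test them before you need them."""
    breaches = breached(metrics)
    if breaches:
        pause_model()  # stop routing new requests to the model
        rollback()     # revert to the last validated version or a manual workflow
        print(f"Model paused; breached thresholds: {breaches}")
```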
State-by-state signals worth tracking
- Disclosure mandates: Requirements to inform regulators, clinicians, and members when AI influences a decision.
- Algorithm access: Regulator inspection rights and documentation standards for third-party models.
- Prohibitions: Explicit bans on AI as the sole basis for a denial, and constraints on using non-clinical proxies.
- Fairness duties: Obligations to demonstrate equitable outcomes across populations and lines of business.
What to expect next
More bills, more hearings, and likely lawsuits over federal preemption. Even if preemption attempts stall, the political pressure won't. Expect growing demands for transparency, auditability, and proof that AI reduces delay and error rather than creating them.
Insurers that operationalize meaningful human review, bias testing, and clear rationales now will be better positioned with regulators, providers, and members, and will spend less time in depositions later.
Further reading and training
For hands-on guidance on safe deployment and controls, see AI for Insurance. If your policy or compliance team is shaping governance, align on frameworks with AI for Policy Makers.
Regulatory context: the NAIC's model bulletin on insurer AI use outlines supervisory expectations for governance and risk controls. Read the NAIC announcement.
Background on algorithmic denials and public scrutiny: ProPublica's reporting on claim review automation.