Insurers Back Trump's AI Executive Order as States Decry Preemption and Funding Pressure

White House order sets a federal AI baseline for insurance and readies a litigation team to challenge conflicting state laws. Insurers cheer consistency; states push back.

Categorized in: AI News, Insurance
Published on: Dec 16, 2025

AI Order Puts Federal Guardrails on Insurance Tech - and Tests State Authority

Health insurers welcomed the White House's executive order to create a national approach to AI. State insurance legislators did not. The split is familiar: carriers want uniform rules; states want to keep hands-on oversight.

The order sets a minimum federal standard for AI and empowers a new litigation task force to challenge state laws that conflict with federal policy. The administration argues that a patchwork of state rules is creating conflicting obligations, interstate friction, and ideological bias in AI systems.

What the Order Actually Does

  • Preemption push: Establishes a national framework and aims to bar state laws that conflict with federal AI policy objectives.
  • Litigation task force (30 days): The attorney general must form an AI litigation group to challenge state AI statutes viewed as unconstitutional, federally preempted, or otherwise unlawful.
  • State law assessment (90 days): The federal government will publish an inventory of state AI laws and flag those targeted for litigation.
  • Funding leverage (90 days): The Department of Commerce will tie remaining BEAD funds to state AI policy; states deemed to have onerous AI laws could lose access to non-deployed funding, to the extent federal law allows. See NTIA's BEAD program overview for details.

Health Insurer View: Consistency Over Complexity

AHIP signaled support, arguing a federal baseline lowers compliance burden as AI use grows in claims, care management, and admin workflows. The group favors high-level, flexible, risk-based guardrails that won't require constant rewrites as technology shifts.

They want regulators to protect proprietary information, rely on industry standards, and reserve audits or third-party reviews for higher-risk uses. AHIP also warned against creating new private rights of action tied to AI, noting litigation exposure could slow adoption.

State Legislator View: Local Control and Consumer Protection

NCOIL criticized the attempt to curb state authority, saying state-level policymaking responds directly to constituent concerns. The group previously pushed back on a floated 10-year pause on state AI laws and maintains that local safeguards are needed now.

From underwriting to claims and appeals, states argue AI touches market-specific issues and consumer risks differently. Their position: local variation requires local rules.

What This Means for Carriers, MGAs, and TPAs

  • Plan for two tracks: Build compliance maps for a federal baseline while maintaining state-by-state matrices until courts or Congress settle preemption.
  • Prioritize high-risk uses: Flag AI that affects eligibility, pricing, claims denials, fraud scoring, and provider network decisions. Expect scrutiny and possible audit requirements.
  • Adopt recognized standards: Align model lifecycle practices (data sourcing, bias testing, performance monitoring, human-in-the-loop) with frameworks such as NIST's AI Risk Management Framework (AI RMF).
  • Tighten documentation: Maintain model cards, decision logs, feature lists, training/testing datasets, known limitations, and mitigation steps. This pays off under both federal and state scrutiny.
  • Vendor governance: Update contracts for audit rights, transparency on model updates, explainability artifacts, security, and incident reporting. Include fallback workflows if a model must be paused.
  • Consumer fairness controls: Expand adverse action logic, appeal pathways, and human review for sensitive decisions. Keep clear disclosures on automated processing where required.
  • Litigation readiness: Coordinate with legal on anticipated state challenges. Preserve documentation that supports federal preemption arguments and risk-based controls.
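The first two items above, an AI inventory and risk prioritization, can be sketched as a simple use-case register. This is a minimal illustration, not a regulatory taxonomy: the tier names, the `HIGH_RISK_USES` list, and the `AIUseCase` fields are assumptions drawn from the decision types called out in this article, and a real program would map tiers to its own governance policy.

```python
from dataclasses import dataclass, field

# Illustrative list of decision types this article flags as high-risk;
# not an official category from the executive order or any regulator.
HIGH_RISK_USES = {
    "eligibility", "pricing", "claims_denial",
    "fraud_scoring", "network_decision",
}

@dataclass
class AIUseCase:
    name: str
    business_area: str                      # e.g. "underwriting", "claims", "SIU"
    decision_types: set = field(default_factory=set)
    states_deployed: set = field(default_factory=set)

def risk_tier(use_case: AIUseCase) -> str:
    """Assign an oversight tier; high-risk uses get the deepest review."""
    if use_case.decision_types & HIGH_RISK_USES:
        return "tier-1"   # audit-ready docs, bias testing, human-in-the-loop
    if use_case.decision_types:
        return "tier-2"   # periodic monitoring and documentation
    return "tier-3"       # standard change management

claims_model = AIUseCase(
    name="auto-adjudication-v2",
    business_area="claims",
    decision_types={"claims_denial"},
    states_deployed={"CA", "TX"},
)
print(risk_tier(claims_model))  # prints "tier-1"
```

Keeping the register as structured data (rather than a spreadsheet) makes it easy to regenerate the state-by-state compliance matrix as the federal baseline and state laws diverge.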

Key Dates and Triggers to Watch

  • Day 30: AI litigation group formation; early signals on enforcement posture.
  • Day 90: Federal assessment of state AI laws; list of statutes likely to face challenges.
  • Day 90: Commerce guidance linking BEAD funds to state AI policy; potential funding implications for states and indirect policy pressure on regulators.

Operational Checklist for Insurance Teams

  • Create an inventory of all AI/ML systems across underwriting, claims, SIU, provider management, and customer service.
  • Classify each use case by risk to consumers and business, then assign oversight depth by tier.
  • Stand up or refine an AI governance committee with representation from compliance, legal, actuarial, clinical (for health), and IT/security.
  • Implement bias, stability, and drift testing with thresholds that trigger human review.
  • Review state AI laws impacting disclosures, adverse actions, data retention, and appeals. Prepare gap analyses against the federal baseline.
  • Refresh training for adjusters, underwriters, and care managers on appropriate AI use and escalation paths.
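The drift-testing item in the checklist can be sketched as a threshold check that escalates to human review. This is one possible approach, assuming a Population Stability Index over pre-binned score distributions; the 0.2 threshold is a common rule of thumb, not a value set by the order or any regulator.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over pre-binned proportions."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

DRIFT_THRESHOLD = 0.2  # rule-of-thumb cutoff; tune per model and use case

def check_drift(baseline_bins: list[float], current_bins: list[float]):
    """Return an action plus the drift score for the governance log."""
    score = psi(baseline_bins, current_bins)
    if score > DRIFT_THRESHOLD:
        return ("human_review", score)   # route to the governance committee
    return ("ok", score)

status, score = check_drift([0.25, 0.25, 0.25, 0.25],
                            [0.40, 0.30, 0.20, 0.10])
print(status, round(score, 3))  # prints "human_review 0.228"
```

The same pattern extends to bias and stability metrics: compute, log, compare against a documented threshold, and escalate rather than auto-remediate, which keeps a human decision point in the loop for the sensitive uses flagged above.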

The Bottom Line

This order sets up a direct contest between federal uniformity and state control. Insurers get a path to consistent rules, but litigation and rulemaking will take time.

Use that window to mature AI governance, focus on high-impact decisions, and prepare for audits from either side. If the federal standard prevails, you're ready. If states hold ground, you're still covered.

Looking to upskill your teams on practical AI workflows and controls? Explore role-based options at Complete AI Training.

