Trump Administration's 2025 AI Order Favors Federal Preemption: What It Means for Insurance

A federal AI baseline could simplify insurer compliance, yet state oversight on rates, forms, and fairness won't fade. Plan for pressure on pricing, underwriting, and claims.

Categorized in: AI News, Insurance
Published on: Mar 07, 2026

Federal Preemption of State AI Rules: What It Could Mean for Insurers

A December 2025 executive order signaling national uniformity for AI would reset the compliance playbook. For insurers, the stakes are high: pricing, underwriting, claims automation, and model governance all sit in the crosshairs.

Treat this as a likely direction, not a guarantee. Plan for a federal baseline while assuming state insurance oversight remains in force.

Why this would matter for carriers

State-by-state AI rules are starting to diverge. A federal policy that preempts or limits conflicting state AI rules could simplify compliance, but only for AI requirements, not for core insurance statutes.

McCarran-Ferguson still gives states primary authority over insurance unless a federal law specifically relates to the business of insurance. So expect dual pressure: a federal AI baseline plus ongoing state reviews on rates, forms, market conduct, and unfair discrimination.

What a federal AI baseline could require

  • Risk-based AI governance: tier your systems by impact (pricing, underwriting, claim denial = high risk) and scale controls accordingly.
  • Model lifecycle controls: documented objectives, data provenance, training/evaluation records, performance monitoring, and retirement criteria.
  • Bias and fairness testing: pre-deployment and ongoing, with clear thresholds, remediation triggers, and human review for edge cases.
  • Transparency: decision notices that explain key factors, especially for adverse actions and appeal paths.
  • Incident and change management: material model changes, drift, and material incidents reported within set timelines.
  • Third-party oversight: contractual rights to audit, evidence of testing, and clear accountability for model updates.
  • Recordkeeping: audit-ready logs, datasets, test results, approvals, and version history.

Expect alignment with common risk frameworks, such as the NIST AI Risk Management Framework.
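The risk-based tiering described above can be sketched as a simple classification rule. This is an illustrative Python sketch, assuming hypothetical use-case labels and tier names; none of these values come from any actual federal or state rule text.

```python
# Hypothetical risk-tiering sketch: classify AI systems by customer impact
# and scale governance controls accordingly. Labels are illustrative
# assumptions, not drawn from any regulation.

HIGH_IMPACT_USES = {"pricing", "underwriting", "claim_denial", "fraud_referral"}

def risk_tier(use_case: str, automated_decision: bool) -> str:
    """Return a governance tier for an AI system.

    High: directly affects eligibility, price, or claim outcomes.
    Medium: high-impact use case, but advisory (a human makes the final call).
    Low: internal or operational tooling.
    """
    if use_case in HIGH_IMPACT_USES:
        return "high" if automated_decision else "medium"
    return "low"

# Controls scale with tier: high-risk systems get pre-deployment bias
# testing, human review of adverse decisions, and audit-ready logs.
CONTROLS = {
    "high": ["bias_testing", "human_review", "decision_logging", "annual_revalidation"],
    "medium": ["bias_testing", "decision_logging"],
    "low": ["inventory_entry"],
}

print(risk_tier("claim_denial", automated_decision=True))  # high
print(risk_tier("marketing", automated_decision=True))     # low
```

The point of a sketch like this is that the tiering logic is explicit and reviewable, so an exam team can see exactly why a system landed in a given tier.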

Insurance-specific impacts

  • Underwriting and pricing: Stronger documentation for factors used, justification for risk segmentation, and tests for proxy bias. Actuarial sign-off will carry more weight.
  • Rate filings: States will still expect clear exhibits. A federal baseline won't replace actuarial standards or state filing requirements.
  • Claims: Model-driven denials and SIU referrals will need human-in-the-loop review, reason codes, and appealable notices.
  • Marketing and distribution: Lead scoring and quoting flows need consent, data source transparency, and guardrails against unfair discrimination.
  • Data sources: Credit-based data, telematics, wearables, and alternative data will face tighter provenance checks and FCRA-style adverse action discipline where applicable.

State oversight won't disappear

Even with federal preemption on AI methods, state DOIs remain focused on unfair discrimination, transparency, and actuarial support. Several states already echo the NAIC's direction on governance, testing, and documentation for AI systems.

For context, see NAIC's work on insurer AI oversight: NAIC Model Bulletin on the Use of AI Systems.

90-day readiness plan for carriers and MGAs

  • Inventory: List all models and automated tools used across pricing, underwriting, claims, fraud, marketing, and operations.
  • Risk-tier: Classify by customer impact and regulatory exposure. Flag anything affecting eligibility, price, or claim decisions as high risk.
  • Document: Lock down model purpose, inputs, training data sources, known limitations, monitoring metrics, and approval owners.
  • Test: Run bias, stability, and performance tests on high-risk models. Define thresholds and fix plans.
  • Controls: Add human review for adverse decisions and define appeal/escalation paths.
  • Vendors: Add AI clauses covering testing evidence, change notices, audit rights, incident response, and end-of-life terms.
  • Notices: Refresh adverse action and decision explanations. Keep them plain-language and specific.
  • Playbooks: Build incident, model change, and regulatory response runbooks. Assign owners and SLAs.

Model governance checkpoints your exam team will ask for

  • Purpose and policy fit: The model's outcome aligns with filed rating plans and underwriting guidelines.
  • Data quality: Provenance, permissions, representativeness, and drift monitoring.
  • Feature controls: No protected attributes or obvious proxies; documented reasoning for inclusion/exclusion.
  • Performance: Backtests, out-of-sample tests, and stability across segments and time.
  • Fairness: Clear metrics, thresholds, remediation steps, and sign-offs before deployment.
  • Governance: Versioning, approvals, overrides, and end-user training logs.
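One common way to put numbers behind the fairness checkpoint above is an adverse impact ratio on approval rates. A minimal sketch, assuming illustrative counts: the 0.8 ("four-fifths") threshold is borrowed from US employment-selection guidance and is only one possible cutoff; actual insurance regulatory thresholds may differ.

```python
# Sketch of an adverse impact ratio (AIR) check on approval outcomes.
# Counts below are made up for illustration; the 0.8 threshold is an
# assumption borrowed from employment-selection practice.

def adverse_impact_ratio(approved_a: int, total_a: int,
                         approved_b: int, total_b: int) -> float:
    """Ratio of approval rates: comparison group A vs reference group B."""
    rate_a = approved_a / total_a
    rate_b = approved_b / total_b
    return rate_a / rate_b

air = adverse_impact_ratio(approved_a=120, total_a=200,   # 60% approval
                           approved_b=160, total_b=200)   # 80% approval
print(round(air, 2))   # 0.75
print(air >= 0.8)      # False: below threshold, triggers remediation review
```

A ratio below the chosen threshold does not prove unfair discrimination on its own, but it is the kind of pre-defined trigger, with a documented remediation step, that the checkpoint list anticipates.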

Data and documentation you should keep

  • Datasets and lineage: Source systems, vendors, timestamps, licenses, and any transformations.
  • Model cards: Intended use, limitations, performance, fairness results, and contact owner.
  • Decision logs: Inputs, version IDs, reason codes, overrides, and final outcomes.
  • Customer notices: Templates for adverse action, explanations, and appeal instructions.
  • Vendor evidence: Test results, SOC reports if available, change notices, and incident reports.
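The decision-log item above can be made concrete as a structured record. This is a sketch only: field names are illustrative assumptions, and you would align them with your own filing exhibits and retention policy.

```python
# Sketch of an audit-ready decision-log record, mirroring the checklist:
# inputs, version IDs, reason codes, overrides, and final outcomes.
# Field names are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionLogEntry:
    model_id: str        # stable identifier from the model inventory
    model_version: str   # version ID used for this specific decision
    inputs: dict         # features as seen by the model at decision time
    outcome: str         # e.g. "approve", "refer", "deny"
    reason_codes: list   # factor codes that feed plain-language notices
    human_override: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

entry = DecisionLogEntry(
    model_id="uw-risk-score",
    model_version="2.3.1",
    inputs={"territory": "X", "prior_claims": 1},
    outcome="refer",
    reason_codes=["PRIOR_CLAIMS"],
)
print(asdict(entry)["outcome"])   # refer
```

Capturing the model version alongside each outcome is what makes the log reproducible: an examiner can tie any individual decision back to the exact model, inputs, and reason codes in force at the time.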

What to ask your vendors now

  • Which models affect eligibility, price, claim payout, or fraud flags? Provide model cards and test results.
  • How do you measure bias and drift? Share thresholds, frequency, and remediation steps.
  • What events trigger a "material change" notice to us, and how fast will you notify?
  • Can we audit your training data sources and governance controls?
  • If regulators ask for evidence, what can you produce in 10 days?

Practical takeaway

Assume a federal AI floor with state insurance oversight on top. Build one governance system that satisfies both. Start with inventory, risk tiering, testing, documentation, and vendor controls. That foundation works under almost any rule set.
