12 States Test How Insurers Use AI in Pilot Aimed at Uniform Standards

A 12-state pilot will examine how insurers use AI in underwriting, pricing, claims, and marketing. Insurers should tighten governance, fairness testing, documentation, and vendor oversight now.

Categorized in: AI News, Insurance
Published on: Mar 13, 2026

12-state AI oversight pilot: what insurers need to know and do now

A pilot program rolling out across 12 states is set to evaluate how insurers use AI systems. The goal: move the market closer to uniform standards for assessing models, controls, and outcomes. If your company uses algorithms in underwriting, pricing, claims, fraud, or marketing, this is your early warning to tighten governance before examiners come knocking.

This isn't theoretical. Regulators are building a common playbook for how to review AI in insurance, with a clear focus on consumer protection, fairness, and accountability. Expect data calls, questionnaires, and targeted exams that go deeper than a model write-up or vendor brochure.

Why this matters now

Regulatory expectations are converging. States have already signaled priorities around governance, bias mitigation, transparency, and third-party oversight. For context, see the NAIC's AI principles that many departments reference as a north star for responsible use of algorithms and models.

Together, the NAIC AI Principles and laws like Colorado's SB 21-169 show where reviews are headed: demonstrable controls, measurable outcomes, and documentation that examiners can test.

What regulators will likely examine

  • AI governance: Board and executive accountability, defined roles, policies, and risk appetite specific to AI.
  • Model inventory: A current, complete list of models and tools, including vendor and low/no-code solutions used by business teams.
  • Use cases and scope: Where AI touches consumers (underwriting, pricing, claims, marketing, fraud) and decision criticality.
  • Data management: Sources (internal, third-party, external consumer data and information sources, or ECDIS), lineage, quality checks, and suitability for insurance use.
  • Fairness and bias testing: Methods, frequency, cohorts evaluated, thresholds, and remediation workflow.
  • Explainability: How decisions are explained internally and to consumers; clarity of adverse action reasons.
  • Model development and validation: Documentation, independent review, performance metrics, and periodic revalidation.
  • Monitoring and drift: Ongoing controls for accuracy, stability, and unintended impacts.
  • Change management: Version control, approvals, and rollback plans for models and features.
  • Vendor oversight: Due diligence, contractual rights to audit, model change notifications, and SOC/pen-test evidence.
  • Consumer impact controls: Complaint monitoring, escalations, and root-cause fixes tied to AI use.
  • Recordkeeping: Evidence that can support, in an exam months from now, any decision you make today.
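To make the fairness-testing bullet concrete, here is a minimal sketch of one widely used metric: the ratio of favorable-outcome rates between a tested cohort and a reference cohort, checked against an illustrative 0.80 threshold (the "four-fifths" rule of thumb from employment testing, often borrowed as a starting point). The cohort data, threshold, and remediation step are all hypothetical; your own policy should define the metrics, cohorts, and thresholds.

```python
def selection_rate(outcomes):
    """Share of a cohort receiving the favorable outcome (1 = favorable)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(tested_outcomes, reference_outcomes):
    """Ratio of the tested cohort's favorable-outcome rate to the
    reference cohort's rate. Values near 1.0 indicate parity."""
    return selection_rate(tested_outcomes) / selection_rate(reference_outcomes)

# Hypothetical decisions: 1 = approved at standard rates, 0 = declined/surcharged
cohort_reference = [1, 1, 0, 1, 0, 1, 1, 1]   # 6/8 favorable = 0.75
cohort_tested    = [1, 0, 0, 1, 0, 1, 0, 1]   # 4/8 favorable = 0.50

ratio = disparate_impact_ratio(cohort_tested, cohort_reference)
THRESHOLD = 0.80  # example threshold only; set per your own policy

if ratio < THRESHOLD:
    print(f"ratio {ratio:.2f} below {THRESHOLD}: trigger remediation review")
```

The point examiners will press on is not the arithmetic but the workflow around it: which cohorts you test, how often, and what documented action a breach triggers.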

The minimum documentation you should have on hand

  • AI and model risk policy; standards for fairness, explainability, and testing.
  • End-to-end model documentation: objective, design, features, training data, limitations, and known risks.
  • Validation reports and QA results, including fairness analyses and performance benchmarks.
  • Monitoring dashboards with thresholds, alerts, and incident logs.
  • Consumer-facing notice language and adverse action templates tied to actual reasons.
  • Third-party contracts, due diligence files, and attestations related to data and models.

Action plan for the next 90 days

  • Name an accountable owner: One executive with authority over AI risk, reporting to the board or a risk committee.
  • Inventory everything: Centralize all AI/ML models, scoring tools, and rule engines, including those embedded in vendor platforms.
  • Risk-rate use cases: Rank by consumer impact and decision criticality; prioritize reviews for underwriting, pricing, and claims.
  • Stand up fairness testing: Define cohorts, metrics, thresholds, frequency, and remediation triggers; document every decision.
  • Tighten explainability: Ensure internal interpretability and consumer-ready reasons match how the model actually works.
  • Lock down change control: No model moves to prod without approvals, rollback plans, and updated documentation.
  • Upgrade vendor oversight: Add rights to audit, model change notifications, and data provenance obligations to contracts.
  • Train your teams: Underwriting, claims, and compliance should know the policy, the controls, and their roles in exams.
  • Prepare an exam pack: A ready-to-send folder with policies, inventories, validation reports, monitoring evidence, and sample decision files.
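The inventory step above is often the hardest to start. A minimal sketch of what one inventory entry might capture, using a plain Python dataclass, is below; the field names, example models, and vendor name are all hypothetical, and most insurers would hold this in a GRC platform rather than code.

```python
from dataclasses import dataclass, field, asdict
from typing import List, Optional

@dataclass
class ModelRecord:
    """One entry in a centralized AI/ML model inventory (illustrative fields)."""
    model_id: str
    name: str
    owner: str                          # accountable executive or team
    use_case: str                       # underwriting, pricing, claims, ...
    vendor: Optional[str] = None        # None for in-house models
    consumer_facing: bool = False
    risk_tier: str = "unrated"          # e.g. high / medium / low
    last_validated: Optional[str] = None  # ISO date of last validation
    data_sources: List[str] = field(default_factory=list)

# Two hypothetical entries
inventory = [
    ModelRecord("UW-001", "Auto underwriting score", owner="Chief UW Officer",
                use_case="underwriting", consumer_facing=True, risk_tier="high",
                last_validated="2025-11-01",
                data_sources=["MVR", "credit-based score"]),
    ModelRecord("MKT-014", "Lead scoring", owner="Marketing Analytics",
                use_case="marketing", vendor="ExampleVendorCo"),
]

# Queries like these feed the risk-rating step: what is consumer-facing
# but never validated, and what has no risk tier at all?
never_validated = [m for m in inventory
                   if m.consumer_facing and m.last_validated is None]
unrated = [m for m in inventory if m.risk_tier == "unrated"]
```

Even this small structure supports the exam-pack step: `asdict(record)` serializes each entry for a data call, and the filters make the review queue explicit.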

Questions you should be ready to answer

  • Which models affect consumer outcomes, and who approved them?
  • How do you test for unfair discrimination, how often, and what happens when thresholds are breached?
  • What data sources and proxies are used, and why are they appropriate for insurance decisions?
  • How do you explain decisions to consumers, and do adverse action reasons align with model features?
  • What changed in the last 12 months, and how did you validate the impact before deployment?
  • How do you oversee vendors and verify their controls rather than just accept attestations?
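Answering the "what changed and how do you monitor it" questions usually means a quantitative drift check. One common choice is the population stability index (PSI), which compares a model's score distribution at validation time to the current distribution. The sketch below assumes pre-binned proportions; the bin counts, example distributions, and decision bands are illustrative rules of thumb, not regulatory standards.

```python
import math

def psi(expected_props, actual_props, eps=1e-6):
    """Population Stability Index between two binned score distributions.
    Both inputs are lists of bin proportions that each sum to 1.0."""
    total = 0.0
    for e, a in zip(expected_props, actual_props):
        e, a = max(e, eps), max(a, eps)   # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

# Hypothetical: baseline (validation) vs. current-month distribution, 5 bins
baseline = [0.20, 0.20, 0.20, 0.20, 0.20]
current  = [0.28, 0.22, 0.20, 0.18, 0.12]

value = psi(baseline, current)
# Common rule of thumb (not a standard): < 0.10 stable,
# 0.10-0.25 investigate, > 0.25 likely drift
status = ("stable" if value < 0.10
          else "investigate" if value < 0.25
          else "drift")
```

Whatever metric you pick, the examiner-ready artifact is the same: thresholds defined in policy, dashboards that show the metric over time, and incident logs when a threshold is breached.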

Impacts by function

  • Underwriting and pricing: Highest scrutiny; be precise about features, proxies, and fairness controls.
  • Claims: Triage, SIU referrals, and severity predictions need explainability and human-in-the-loop checkpoints.
  • Marketing and distribution: Targeting models should avoid unfair exclusion and respect data use limitations.
  • Fraud: Strong performance is good, but keep an audit trail and confirm non-discriminatory signals.

Third-party and embedded models

If a vendor's tool drives your decision, you still own the outcome. Examiners will expect actual proof, not just a slide deck, that the tool performs as intended and treats consumers fairly.

  • Require model and data transparency appropriate to risk level; escalate if the vendor is a "black box."
  • Obtain change logs, version notices, and a process to re-validate after material updates.
  • Map vendor controls to your policy; close gaps with compensating controls or stop-use criteria.

If you find gaps

Document them, assign owners, and publish a remediation plan with dates. Reduce exposure quickly by throttling or disabling high-risk features while you fix root causes. Report material items to your risk or board committee and track through closure.

What to expect next

As the 12-state pilot matures, expect more consistent exam procedures and clearer expectations. That's good news if you build the basics now: governance, testing, documentation, and strong vendor oversight. The companies that treat this as a standing control program, not a one-off, will move through exams faster and with fewer surprises.
