When Algorithms Underwrite, Regulators Demand Explanations

Insurers face rising scrutiny as regulators demand explainable, fair, accountable AI. Expect human oversight, bias testing, documentation, and vendor controls to be mandatory.

Categorized in: AI News Insurance
Published on: Oct 10, 2025

When Algorithms Underwrite: Insurance Regulators Demanding Explainable AI Systems

AI now touches every core regulated process in insurance: underwriting, pricing, claims, fraud detection and customer service. That reach brings scrutiny. Regulators, consumer advocates and courts want decisions that are explainable, fair and accountable - especially adverse ones.

The message is clear: automation without accountability creates risk. Expect human oversight, transparency and procedural fairness to be non-negotiable. The compliance bar is rising.

NAIC Model AI Bulletin: What It Expects

In 2023, the NAIC issued its Model Bulletin on AI use by insurers, a blueprint many states are following. If you use AI, build your program around these expectations:

  • Documented governance: Cover development, acquisition, deployment and monitoring - including third-party tools.
  • Transparency and explainability: Be able to explain how inputs lead to outputs or decisions.
  • Consumer notice: Disclose AI use with details appropriate to the lifecycle phase (quote, underwriting, claims, etc.).
  • Fairness and nondiscrimination: Test for bias and unfair discrimination and fix issues proactively.
  • Risk-based oversight: Apply tighter controls for high-impact decisions like rate setting, denials or rescissions.
  • Internal controls and auditability: Independent validation, periodic reviews and performance monitoring over time.
  • Third-party vendor management: You remain responsible for vendor systems; require due diligence and contractual safeguards.
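The "performance monitoring over time" expectation is often operationalized with a drift metric. Below is a minimal sketch of one common choice, the population stability index (PSI), comparing a model's binned score distribution at validation time against production. The bin counts and the 0.1/0.25 rules of thumb are illustrative conventions, not regulatory thresholds:

```python
import math

def population_stability_index(expected, actual):
    """PSI between two binned distributions, given counts per bin.

    Rule of thumb (illustrative only): < 0.1 stable, 0.1-0.25 watch,
    > 0.25 significant drift warranting investigation.
    """
    e_total, a_total = sum(expected), sum(actual)
    psi = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_total, 1e-6)  # floor to avoid log(0)
        a_pct = max(a / a_total, 1e-6)
        psi += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return psi

# Identical distributions drift toward a PSI of zero.
baseline = [50, 30, 20]          # score-bin counts at validation
stable = population_stability_index(baseline, [50, 30, 20])
shifted = population_stability_index(baseline, [20, 30, 50])
```

Logging PSI per model version on a schedule, with a documented action threshold, is one way to satisfy the "periodic reviews and performance monitoring" expectation with evidence rather than assertion.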

NAIC resources on AI

Key State Rules You Need to Internalize

  • New York (DFS Circular Letter 2024-7): You must show AI and external data are not proxies for protected classes and do not create disproportionate adverse effects. Maintain explainability for adverse outcomes, bias tests, internal logs and allow DFS to review vendor tools and audits. Read the DFS letter
  • Colorado (C.R.S. §10-3-1104.9): Bans unfair discrimination arising from external consumer data and predictive models. Life insurers must perform quantitative disparate impact testing even for neutral inputs. Effective October 15, 2025, this expands to private passenger auto and health benefit plans.
  • California (H&S Code §1367.01 / Ins. Code §10123.135): Health plans and disability insurers cannot rely solely on automated tools for health care decisions; adverse determinations require review by a licensed clinician. Disclosure of AI contribution and accessible appeals are required.

What Regulators Will Ask For

  • Proof your models do not cause unfair discrimination, including disparate impact analysis.
  • Clear explanations for adverse actions, tied to inputs, thresholds and decision logic.
  • Human review for high-stakes decisions and documented escalation paths.
  • Complete audit trails: data lineage, model versions, change logs and performance drift controls.
  • Vendor diligence: access to logic where feasible, bias audits, contractual safeguards and the right to review.
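The disparate impact analysis in the first bullet can start as simply as comparing favorable-outcome rates across groups. A minimal illustrative sketch using the adverse impact ratio follows; the 0.8 "four-fifths" flag threshold comes from EEOC employment guidance and is shown only as an example trigger, not an insurance standard:

```python
from collections import Counter

def adverse_impact_ratio(decisions):
    """Each group's approval rate divided by the highest group's rate.

    `decisions` is an iterable of (group, approved) pairs; group labels
    here are placeholders for whatever protected-class proxy analysis
    your jurisdiction requires.
    """
    totals, approvals = Counter(), Counter()
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

def flag_disparate_impact(ratios, threshold=0.8):
    # Groups whose relative approval rate falls below the flag threshold.
    return [g for g, r in ratios.items() if r < threshold]
```

Real filings will need far more (statistical significance, proxy-variable checks, remediation records), but even this simple ratio, run routinely and logged, is the kind of documented evidence examiners ask to see.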

Practical Steps to Get Compliant Now

  • Inventory and risk triage: Catalog every model across underwriting, pricing, claims, fraud, service and marketing. Rank by impact, harm potential, opacity and reliance on external data.
  • Defensible by documentation: For each system, keep purpose, data sources, variable descriptions, performance metrics, drift controls, validation, versioning and change logs.
  • Validation and bias testing: Run fairness assessments, proxy checks, sensitivity analysis, error audits and stress tests. Define action thresholds for remediation or disablement and keep full reports.
  • Vendor management: Require lawful data sourcing, access to model logic where possible and indemnities. Demand warranties that training inputs exclude questionable repositories and gray-market datasets.
  • Explainability infrastructure: Use reasoning modules and feature attribution (e.g., LIME) or surrogate models. Keep trace logs that map outputs back to inputs, logic and thresholds.
  • Regulatory filings: In states with AI rules, file or certify usage as required. Prepare a regulator-ready package: validation results, bias reports, oversight plans, vendor audits and explanation procedures.
  • Governance and compliance: Establish board oversight, executive committees, business owners, model risk and compliance roles. Align AI use with Unfair Trade Practices, Unfair Claims Settlement, Corporate Governance/Disclosure Acts, state rating laws and market conduct requirements.
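The trace logs described under "Explainability infrastructure" can be as simple as an append-only record tying each output back to its inputs, model version and threshold. A hypothetical sketch follows; the field names and hash scheme are assumptions to adapt to your own model-risk framework, not a prescribed format:

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    """One auditable record linking an output to inputs, logic and version."""
    model_id: str
    model_version: str
    inputs: dict
    score: float
    threshold: float
    outcome: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def fingerprint(self) -> str:
        # Stable hash of the full record so later tampering is detectable.
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def decide(score, threshold, trace_log, **meta):
    """Apply a threshold and append a trace; borderline cases go to a human."""
    outcome = "approve" if score >= threshold else "refer_to_human"
    trace_log.append(DecisionTrace(score=score, threshold=threshold,
                                   outcome=outcome, **meta))
    return outcome
```

Routing sub-threshold scores to human review, rather than auto-denying, is one straightforward way to honor the human-oversight expectations above while still capturing a complete audit trail.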

Insurance Departments That Have Adopted the NAIC Model AI Bulletin (Full or Similar)

  • Alaska - Bulletin B 24-01 - February 1, 2024
  • Arkansas - Bulletin 13-2024 - July 31, 2024
  • California - Bulletin 2022-5 - June 30, 2022
  • Colorado - 3 CCR 702-10 - November 13, 2023
  • Connecticut - Bulletin No. MC-25 - February 26, 2024
  • Delaware - Domestic and Foreign Bulletin No. 148 - February 5, 2025
  • District of Columbia - Bulletin 24-IB-002-05/21 - May 21, 2024
  • Illinois - Company Bulletin 2024-08 - March 13, 2024
  • Iowa - Insurance Division Bulletin 24-04 - November 7, 2024
  • Kentucky - Bulletin No. 2024-02 - April 16, 2024
  • Maryland - Bulletin No. 24-11 - April 22, 2024
  • Massachusetts - Bulletin No. 2024-10 - December 9, 2024
  • Michigan - Bulletin 2024-20-INS - August 7, 2024
  • Nebraska - Insurance Guidance Document No. IGD-H1 - June 11, 2024
  • Nevada - Bulletin 24-001 - February 23, 2024
  • New Hampshire - Bulletin Docket #INS 24-011-AB - February 20, 2024
  • New Jersey - Insurance Bulletin No. 25-03 - February 11, 2025
  • New York - Insurance Circular Letter No. 7 - July 11, 2024
  • North Carolina - Bulletin No. 24-B-19 - December 18, 2024
  • Oklahoma - Bulletin No. 2024-11 - November 14, 2024
  • Pennsylvania - Insurance Notice 2024-04, 54 Pa.B. 1910 - April 6, 2024
  • Rhode Island - Insurance Bulletin No. 2024-03 - March 15, 2024
  • Texas - Bulletin # B-0036-20 - September 30, 2020
  • Vermont - Insurance Bulletin No. 229 - March 12, 2024
  • Virginia - Administrative Letter 2024-01 - July 22, 2024
  • Washington - Technical Assistance Advisory 2024-02 - April 22, 2024
  • West Virginia - Insurance Bulletin No. 24-06 - August 9, 2024
  • Wisconsin - Insurance Bulletin - March 18, 2025

Bottom Line

AI can scale decisions, but insurers must make those decisions explainable, fair and defensible. Build documentation, testing and human oversight into your systems now - and be ready to show your work.

If your teams need skills in explainability, risk testing and AI governance, consider targeted upskilling resources at Complete AI Training.