State regulators move to implement NAIC AI bulletin as adoption spreads across US

Half of U.S. states have adopted an AI bulletin from insurance regulators, but enforcing it remains unsettled. The guidance carries no legal authority, leaving officials to build compliance tools from scratch.

Published on: Apr 25, 2026

State regulators split on how to enforce AI guardrails for insurers

About half of U.S. states have adopted or partially adopted a model bulletin on artificial intelligence, but regulators are now grappling with how to actually enforce it. The National Association of Insurance Commissioners released the guidance in December 2023, and state officials are now moving into the implementation phase.

The bulletin itself carries no legal authority. It defines key AI terms while deliberately avoiding formal definitions for bias or harm, instead focusing on preventing "adverse consumer outcomes" that violate existing insurance standards.

Dorothy L. Andrews, senior behavioral data scientist and actuary at the NAIC Research and Actuarial Department, said regulators are now working to operationalize the bulletin. "The main goal is really to start a discussion on how regulators might operationalize the bulletin now that it's adopted," Andrews said. "This will be a high-level discussion on the key aspects of an insurer's operation that can be measured as being in compliance with the bulletin."

Model cards and risk taxonomy

Regulators are developing a four-level risk taxonomy to classify AI systems from low to unacceptable risk. They're also proposing standardized reporting tools called "model cards" that would function like nutrition labels, showing how AI systems are built, what data they use, and what risks they pose.
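The NAIC has not published a formal schema for these model cards. As an illustration only, a minimal card capturing the fields described above might be sketched like this (all field and class names here are hypothetical, not regulatory terminology):

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    """Four-level taxonomy under discussion: low through unacceptable."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    UNACCEPTABLE = 4

@dataclass
class ModelCard:
    """Nutrition-label-style summary of an AI system (hypothetical schema)."""
    name: str
    purpose: str                                    # what the system is built to do
    data_sources: list[str]                         # datasets the model relies on
    risk_level: RiskLevel                           # classification under the taxonomy
    known_risks: list[str] = field(default_factory=list)

card = ModelCard(
    name="auto-claims-triage",
    purpose="Route incoming auto claims to adjusters",
    data_sources=["internal claims history", "third-party credit data"],
    risk_level=RiskLevel.HIGH,
    known_risks=["third-party data used beyond its original purpose"],
)
print(card.risk_level.name)  # HIGH
```

The appeal of the nutrition-label analogy is that a reviewer can compare systems at a glance: what they do, what they consume, and how risky they are, without reading internal documentation.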

Model drift emerged as a primary concern. If an AI model no longer fits the problem it was designed to solve, consumers face potential harm. Insurers should document their testing methods and validation procedures.
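The bulletin does not specify how drift should be measured. One common industry technique (an assumption here, not NAIC guidance) is the population stability index, which compares the distribution of model scores in production against the distribution seen at training time:

```python
import math

def population_stability_index(expected, actual):
    """PSI between two binned score distributions (fractions summing to 1).

    Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift warranting investigation.
    """
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against log(0) for empty bins
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

# Score distribution when the model was validated vs. in production today
train_bins = [0.25, 0.25, 0.25, 0.25]
prod_bins = [0.10, 0.20, 0.30, 0.40]
print(round(population_stability_index(train_bins, prod_bins), 3))  # 0.228
```

A result in the moderate range, as above, would be the kind of signal that documented testing and validation procedures should flag for human review.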

Data quality is central to the oversight framework. Insurance data itself can reflect inherent biases, such as underrepresentation of uninsured populations, while third-party data sources may be misapplied beyond their original purpose.

Regulators noted that mathematical analysis alone won't catch these problems. "For example, it would be difficult to determine whether more speeding tickets were written in some communities versus others because of over-policing," one regulator explained. "Only a socio-technical analysis would uncover that."

Understanding how data is used requires examining the systems and institutions that generate it, not just the numbers themselves. This socio-technical approach to data analysis goes beyond standard technical review.

Accountability gaps in automation

Consumer advocates flagged a critical vulnerability: overreliance on automation can create accountability gaps if human expertise is lost from the workflow. They called for clear escalation processes when AI systems fail or produce questionable results.

Eric Ellsworth, director of health data strategy for Consumers' Checkbook/Center for the Study of Services, emphasized the need for "well-defined exception handling." When an AI model reaches its limits, control should transfer back to a human reviewer with a clear workflow.

"Those kinds of issues are critical for making sure that consumers can get issues resolved, because otherwise you have nobody's home problem," Ellsworth said.

Implementation challenges

Regulators and insurers both acknowledged that enforcing the bulletin will require additional staffing, training, and coordination with existing oversight systems.

Industry representatives also stressed the importance of data confidentiality when insurers share information with regulators outside formal examinations. State regulators participating in pilot programs have assured insurers that collected information will remain confidential under established regulatory authority.

Michael Humphreys, Pennsylvania insurance commissioner, described the approach: "We have a series of questions really designed to elicit the use and scope of AI of the company, and then, based on initial answers, may go deeper to get the best understanding of how that company is using AI."

For insurance professionals, understanding how regulators are approaching AI oversight will be essential as these frameworks take shape across states.
