AI regulation in insurance: a practical guide for multi-state compliance
AI is now embedded across claims, pricing, marketing, and research. The upside is obvious. The risk is regulatory sprawl that can drain time, budget, and focus if you're not ready.
With no uniform federal standard, states are writing their own rules for AI and data privacy. Twenty-four states have laws in place, and more are queued up as sessions resume in January. If you operate across state lines, you need clear guardrails for both your models and your data.
What varies by state
The NAIC issued model guidance in 2023 that many states used as a template. Themes are consistent: audits, transparent governance, risk controls, and vendor oversight. The details, however, differ enough to matter in day-to-day operations.
Privacy rules that change your playbook
- Universal opt-out tools: required in California, Colorado, Connecticut, Maryland, and Minnesota. Tennessee has no such mandate.
- Teens and profiling: New Jersey requires parental consent to process data from ages 13-17 for targeted ads or profiling.
- Sensitive data: Maryland goes further. Processing must be strictly necessary for the service, and selling that data is off-limits. That bar is higher than Colorado's adequacy test and California's reasonableness standard.
These variations affect consent flows, segmentation, and how you treat signals used in models. Your systems need to recognize and enforce these rules automatically, by user location, data type, and purpose.
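To make that concrete, here's a minimal sketch of encoding per-state rules as data and resolving obligations per request. The state entries, field names, and the `obligations` helper are illustrative assumptions drawn from the examples above, not a complete or authoritative rules map.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StatePrivacyRules:
    honors_universal_opt_out: bool    # must respect universal opt-out signals
    teen_consent_max_age: int | None  # parental consent required through this age
    sensitive_data_standard: str      # "strictly_necessary", "adequate", or "reasonable"

# Illustrative entries only; verify against current statutes before relying on them.
STATE_RULES = {
    "CA": StatePrivacyRules(True, None, "reasonable"),
    "CO": StatePrivacyRules(True, None, "adequate"),
    "MD": StatePrivacyRules(True, None, "strictly_necessary"),
    "NJ": StatePrivacyRules(True, 17, "adequate"),  # opt-out flag here is an assumption
    "TN": StatePrivacyRules(False, None, "adequate"),
}

def obligations(state: str, user_age: int, purpose: str) -> list[str]:
    """Return the consent and processing obligations that apply to one request."""
    rules = STATE_RULES.get(state)
    if rules is None:
        return ["manual_review"]  # unknown jurisdiction: fail safe, not open
    duties = []
    if rules.honors_universal_opt_out:
        duties.append("check_universal_opt_out_signal")
    if (rules.teen_consent_max_age is not None
            and user_age <= rules.teen_consent_max_age
            and purpose in {"targeted_ads", "profiling"}):
        duties.append("require_parental_consent")
    if rules.sensitive_data_standard == "strictly_necessary":
        duties.append("block_sale_of_sensitive_data")
    return duties
```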
AI decisions that affect consumers
Colorado's Artificial Intelligence Act sets substantial expectations for "high-risk" systems used in consequential decisions about consumers. Organizations must show their models do not discriminate, which often means testing with PII and triggering additional privacy obligations.
Expect retention and transparency duties as well. Many states require you to archive data, models, tests, and validation artifacts. Colorado gives consumers rights to understand profiling decisions, see how to get a different outcome, review the data used, correct it, and request a re-evaluation based on corrections. Build these pathways up front, not as a bolt-on.
Read Colorado SB24-205 for scope and rights language.
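As a sketch of what those rights pathways imply for your data model, the record below captures what you would need to retain per automated decision so explanation, correction, and re-evaluation requests can be answered later. Field names are assumptions for illustration, not language from the statute.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProfilingDecisionRecord:
    consumer_id: str
    model_version: str
    inputs_used: dict             # the data actually fed to the model
    outcome: str                  # e.g. "declined", "tier_3_rate"
    principal_reasons: list[str]  # human-readable factors behind the outcome
    how_to_improve: str           # what a different outcome would require
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    corrections: list[dict] = field(default_factory=list)

    def apply_correction(self, field_name: str, new_value) -> None:
        """Log a consumer-supplied correction; re-evaluation is triggered downstream."""
        self.corrections.append({
            "field": field_name,
            "new_value": new_value,
            "received_at": datetime.now(timezone.utc),
        })
```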
Different thresholds, different triggers
Compliance can flip "on" based on customer counts and revenue mix, which vary by state. Maryland applies its requirements to companies serving at least 35,000 customers that earn over half of their revenue from selling personal information. Other thresholds: Montana at 25,000, Minnesota at 100,000, and Tennessee at 175,000.
Track these metrics per state, not just in aggregate. A quarterly dashboard that ties customers, revenue sources, and processing purposes to each jurisdiction will save you from surprise obligations.
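Here's a minimal sketch of that monitoring, using the customer-count thresholds above; it omits Maryland's revenue-mix test for brevity, and the 80% alert margin is an arbitrary choice.

```python
# Per-state trigger monitoring: warn before the legal threshold is reached.
CUSTOMER_THRESHOLDS = {"MD": 35_000, "MT": 25_000, "MN": 100_000, "TN": 175_000}
ALERT_AT = 0.80  # warn at 80% of a state's trigger

def threshold_alerts(customers_by_state: dict[str, int]) -> list[str]:
    alerts = []
    for state, threshold in CUSTOMER_THRESHOLDS.items():
        count = customers_by_state.get(state, 0)
        if count >= threshold:
            alerts.append(f"{state}: threshold reached ({count:,}/{threshold:,})")
        elif count >= ALERT_AT * threshold:
            alerts.append(f"{state}: approaching trigger ({count:,}/{threshold:,})")
    return alerts

# Example, fed from your quarterly dashboard counts:
print(threshold_alerts({"MD": 29_500, "MN": 104_000, "TN": 40_000}))
# -> ['MD: approaching trigger (29,500/35,000)', 'MN: threshold reached (104,000/100,000)']
```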
Build governance that actually works
Manual tagging and ad-hoc reviews won't scale across states. You need automated discovery and policy enforcement that follows the data through your AI workflows: inputs, features, model outputs, and where processing happens.
Key capabilities: classify data on ingest, apply sensitivity labels, and log lineage so you can show how information moves and changes. Your platform should enforce different rules based on data type, user location, and use case, while generating audit trails and impact assessments on demand.
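Here's a simplified sketch of classify-on-ingest with lineage logging. The regex rules and the in-memory `LINEAGE_LOG` are stand-ins for your platform's classifiers and lineage store.

```python
import hashlib
import re
from datetime import datetime, timezone

# Illustrative tagging rules; a real deployment would use proper classifiers.
SENSITIVITY_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "SSN"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "EMAIL"),
]

LINEAGE_LOG: list[dict] = []  # stand-in for your lineage store

def append_to_lineage_log(event: dict) -> None:
    LINEAGE_LOG.append(event)

def classify_on_ingest(record_id: str, payload: str, source: str) -> dict:
    """Label a record at ingest and emit a lineage event for the audit trail."""
    labels = sorted({tag for pattern, tag in SENSITIVITY_RULES if pattern.search(payload)})
    lineage_event = {
        "record_id": record_id,
        "source": source,
        "labels": labels or ["PUBLIC"],
        "content_hash": hashlib.sha256(payload.encode()).hexdigest(),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }
    append_to_lineage_log(lineage_event)
    return lineage_event
```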
What to implement now
- State rules map: one source of truth tying each requirement to concrete controls, owners, and evidence.
- Automated data discovery: continuous classification, sensitive data detection, and lineage tracking for model inputs and outputs.
- Purpose-based access: policies that check "who, where, why" before data is used in training, testing, or production (see the sketch after this list).
- Model registry and evidence store: archive datasets, code, parameters, tests, bias checks, and approvals, all time-stamped.
- Consumer rights workflows: intake, identity verification, data review, correction, re-decision, and response SLAs.
- Vendor oversight: require attestations, testing artifacts, and incident reporting; validate that opt-outs and deletion requests propagate.
- Threshold monitoring: per-state customer counts and revenue mix with alerts as you near trigger points.
- Continuous monitoring: alerts for policy violations, drift, proxy bias, and out-of-scope uses, plus periodic impact assessments.
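For the purpose-based access item above, a minimal sketch of a "who, where, why" gate evaluated before data reaches a workload. The deny rules and labels are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    principal: str     # who is asking
    state: str         # where the data subject lives
    purpose: str       # why: "training", "testing", "production", "profiling"
    labels: frozenset  # sensitivity labels on the data

# (state, purpose) -> labels barred from that use; contents are illustrative
DENY_RULES = {
    ("MD", "training"): {"SENSITIVE"},  # e.g. strictly-necessary standard
    ("NJ", "profiling"): {"TEEN"},      # parental consent not on file
}

def is_allowed(req: AccessRequest) -> bool:
    """Deny if any label on the data is barred for this state and purpose."""
    barred = DENY_RULES.get((req.state, req.purpose), set())
    return not (req.labels & barred)

# Example gate call before a training job pulls a dataset:
req = AccessRequest("model-trainer", "MD", "training", frozenset({"SENSITIVE"}))
assert is_allowed(req) is False
```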
Operating principles
- Build for variance: assume each state will tweak definitions, thresholds, and rights. Parameterize policies so changes don't break operations (see the sketch after this list).
- Prove it or it didn't happen: if you can't quickly produce artifacts (data lineage, test results, decisions), you're exposed.
- Start with consumer rights: design explanations, recourse, and corrections first; it simplifies model and data design.
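And a small sketch of the build-for-variance principle: per-state parameters live in config, so when a state amends a threshold you edit data, not code. The values and field names shown are illustrative.

```python
import json

# Policy-as-data: a state amendment becomes a config change, not a deploy.
POLICY_CONFIG = json.loads("""
{
  "MD": {"customer_threshold": 35000, "sensitive_standard": "strictly_necessary"},
  "TN": {"customer_threshold": 175000, "sensitive_standard": "reasonable"}
}
""")

def policy_for(state: str) -> dict:
    # Unknown states fall back to the strictest defaults rather than failing open.
    return POLICY_CONFIG.get(
        state, {"customer_threshold": 0, "sensitive_standard": "strictly_necessary"}
    )

assert policy_for("MD")["customer_threshold"] == 35000
assert policy_for("XX")["sensitive_standard"] == "strictly_necessary"  # strict fallback
```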
Skill up your team
Your compliance, data, and product teams need a shared playbook for AI governance. If you're building capability in-house, curated training can accelerate rollout and reduce rework.
Browse AI courses by job role to upskill teams supporting compliance, data science, and product.
Bottom line
State-by-state rules will keep shifting. If your controls are automated, evidence-ready, and purpose-aware, you can move fast without risking fines or trust. Build the foundation once, then adapt as the rules change.