Michigan DIFS Sets AI Guardrails for Insurers and Financial Firms: Written Programs, Board Oversight, and Real Accountability

Michigan DIFS now requires insurers using AI to keep a written AIS Program with governance, controls, and oversight. Expect fairness tests, explainability, and consumer safeguards.

Published on: Jan 21, 2026

Michigan DIFS sets formal AI governance expectations for insurers

Michigan's Department of Insurance and Financial Services (DIFS) has issued guidance that puts clear guardrails around the use of artificial intelligence across financial services. The directive: if you use AI, you need a written AI Systems Program (AIS Program) with real governance, real controls, and real oversight.

The bulletin welcomes AI's upside: product innovation, better consumer interfaces, automation, and efficiency. At the same time, it draws a firm line on risk. Inaccuracy, unfair discrimination, data exposure, weak transparency, and opaque decision logic are priority concerns. The focus isn't on specific models or tools. It's on accountability for outcomes.

What DIFS expects

  • A written AIS Program that's designed, implemented, and maintained, not a slide deck.
  • Board and executive oversight with clear lines of accountability and decision rights.
  • Governance, risk management, and internal audit embedded in core processes, not off to the side.
  • Model and tool inventory covering development, procurement, and use across underwriting, pricing, claims, and customer interaction.
  • Data controls addressing quality, sources, privacy, security, retention, and access.
  • Fairness and nondiscrimination testing with documented methodologies and thresholds (see the ratio-test sketch after this list).
  • Explainability commensurate with impact: what the model used, why it made a decision, and how it can be challenged.
  • Human oversight for high-impact use cases; no "set it and forget it."
  • Monitoring and change management with triggers, retraining protocols, and rollback plans.
  • Third-party and vendor risk management, including contract clauses, testing rights, and ongoing assurance.
  • Consumer protection measures: disclosures, adverse action notices, complaint handling, and remediation.
  • Documentation and records sufficient for supervision, including decisions, tests, and outcomes.
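
One way to make "documented methodologies and thresholds" concrete is a repeatable ratio test. The sketch below is illustrative Python, not anything prescribed by DIFS or the NAIC: it compares each group's approval rate against the highest-rate group and flags ratios below an assumed 0.80 threshold. The group labels, sample data, and cutoff are placeholders; a real program would choose metrics and thresholds with legal and actuarial input.

```python
# Minimal fairness-check sketch: approval rates by group, ratio vs. the
# highest-rate group, and a pass/fail flag against a documented threshold.
# Groups, sample data, and the 0.80 threshold are illustrative assumptions.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def impact_ratios(rates, threshold=0.80):
    """Ratio of each group's rate to the highest-rate group, plus a pass flag."""
    reference = max(rates.values())
    return {g: (r / reference, r / reference >= threshold) for g, r in rates.items()}

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = approval_rates(sample)
    for group, (ratio, passes) in impact_ratios(rates).items():
        print(f"group {group}: rate={rates[group]:.2f} ratio={ratio:.2f} pass={passes}")
```

The point isn't the specific metric; it's that the test, its threshold, and its results are written down and repeatable, which is what makes them auditable.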

"This new bulletin outlines DIFS' expectations and identifies issues financial services providers should consider when using AI systems," said DIFS Director Anita Fox. She emphasized that AI use must comply with federal and state law while keeping consumer protection front and center. Automation does not dilute regulatory responsibility.

Why this matters for insurance operations

AI is moving deeper into core insurance workflows. That raises both upside and scrutiny. The message from regulators is consistent: AI risk is a board-level issue and must sit inside standard governance, not in experimental silos.

  • Underwriting and pricing: verify the legality and fairness of rating factors, proxies, and data enrichment.
  • Claims: monitor triage, fraud scoring, and settlement tools for accuracy, bias, and explainability.
  • Customer interaction: ensure chatbots and decision aids provide accurate information and don't mislead.
  • Vendor models: you own the outcomes, even when a third party supplies the technology.

Stand up your AIS Program this quarter

  • Appoint an accountable executive and form a cross-functional AI governance committee (risk, legal/compliance, actuarial/data science, IT, audit, business owners).
  • Build a living inventory of AI and advanced analytics tools, including purpose, data inputs, outputs, owners, and risk tiering (a structured-record sketch follows this list).
  • Define your AI risk taxonomy and control standards (fairness, accuracy, privacy, security, explainability, resilience, consumer impact).
  • Establish testing protocols: pre-deployment validation, fairness/accuracy thresholds, stress testing, and periodic revalidation.
  • Integrate vendor oversight: due diligence, contractual obligations, transparency requirements, and monitoring.
  • Document consumer-facing practices: disclosures, adverse action logic, complaint routing, and remediation playbooks.
  • Embed change management: approval gates, model versioning, audit trails, and rollback criteria.
  • Train teams (underwriting, claims, product, CX) on policy, controls, and their roles in oversight.
  • Set an internal audit plan to test design and operating effectiveness within the first year.
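
For the inventory step, it helps to treat each system as a structured record rather than a spreadsheet row, so risk tiering and reporting can be automated. Below is a minimal Python sketch of such a record with a simple, illustrative tiering rule; the field names and the rule are assumptions to adapt, not a schema defined by DIFS or the NAIC.

```python
# Minimal sketch of a living AI inventory record with risk tiering.
# Field names and the tiering rule are illustrative assumptions.

from dataclasses import dataclass, field
from enum import Enum

class RiskTier(str, Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AISystemRecord:
    name: str
    purpose: str                      # e.g. "claims triage", "pricing support"
    owner: str                        # accountable business owner
    vendor: str | None                # None if developed in-house
    data_inputs: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    consumer_facing: bool = False
    affects_eligibility_or_price: bool = False

    @property
    def risk_tier(self) -> RiskTier:
        # Illustrative rule: anything touching eligibility or price is high
        # risk; consumer-facing tools are at least medium.
        if self.affects_eligibility_or_price:
            return RiskTier.HIGH
        if self.consumer_facing:
            return RiskTier.MEDIUM
        return RiskTier.LOW

record = AISystemRecord(
    name="claims-triage-v2",
    purpose="prioritize incoming claims for adjuster review",
    owner="Claims Operations",
    vendor="ExampleVendor Inc.",
    data_inputs=["FNOL text", "policy data"],
    outputs=["priority score"],
)
print(record.name, record.risk_tier.value)
```

A record like this also gives change management something to hang onto: versioning, approval gates, and audit trails can reference the same inventory entry.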

For source material and supervisory context, see the Michigan DIFS website and the NAIC Principles on Artificial Intelligence.

If you're upskilling teams to meet these expectations, explore role-based options at Complete AI Training.

Bottom line

AI can improve insurance operations, but it now comes with formal governance strings attached in Michigan. Put a credible AIS Program in place, own the outcomes, and keep consumer protection at the center of every deployment.

