Florida seeks guardrails on insurance AI to keep humans in charge

Florida moves to police insurers' AI, pushing disclosure, audits, and a human on the hook. If you can't explain the tool, don't use it; carriers should inventory and document now.

Published on: Dec 02, 2025

Florida readies guardrails for AI in insurance: What carriers and MGAs need to do now

Florida lawmakers are moving to establish clear oversight of how insurers use AI. State Insurance Commissioner Michael Yaworsky urged the Senate Banking and Insurance Committee to require disclosure, auditing, and a human-in-the-loop for any AI systems used by carriers.

"Responsible AI governance is crucial," Yaworsky said. He emphasized regulators aren't trying to ban AI, but they want assurance it's used responsibly and is understandable to both companies and regulators.

Key legislative moves

Rep. Hillary Cassel, R-Dania Beach, filed HB 527 to ensure humans - not algorithms - make final decisions on claim denials. Sen. Jennifer Bradley, R-Fleming Island, filed an identical bill (SB 202).

Yaworsky stopped short of backing a blanket human-decision requirement. Instead, he outlined a framework focused on disclosure of AI use, auditable systems, and a qualified human who understands and oversees the tool's decisions.

Why this matters

Lawmakers are escalating their focus on AI across sectors, with the Florida House declaring Dec. 8-12 as "Artificial Intelligence Week" to examine impacts across committees. Leaders recognize the upside of AI, but they're wary of misuse, opaque systems, and unintended harm - especially in claims and pricing.

Industry groups say existing insurance laws already govern AI. As one panelist put it, if a human can't legally do it, neither can AI. Expect regulators to test that claim in practice.

Regulatory signal you shouldn't ignore

Florida regulators recently flagged a filing that relied on an off-the-shelf AI product. When asked what the tool actually did, the company's response was: "We don't know."

That is a bright-line example of what will not fly. If you can't explain it, you can't use it - at least not in rate, rule, form, claims, or consumer-facing decisions.

Action checklist for carriers, MGAs, TPAs, and vendors

  • Inventory all AI/ML use. Claims triage, SIU, adjuster assist, document intake, underwriting, pricing, fraud scoring, chatbots - catalog it all. Note vendors, models, data sources, and decision points (a machine-readable sketch follows this list).
  • Disclose AI use in filings. Be explicit in rate/rule/form filings and responses. Assume OIR will ask for technical and governance detail.
  • Keep a human in the loop with real authority. Define who reviews and can override AI outputs. Document qualifications and procedures.
  • Make it auditable. Maintain versioned model documentation, feature lists, training-data lineage, prompt libraries (for LLMs), and decision logs.
  • Run bias and fairness testing. For underwriting and claims, test for prohibited factors and their proxies. Document methods, thresholds, and remediation steps (see the disparate impact sketch after this list).
  • Require vendor transparency. No "black box" answers. Bake disclosure, testing rights, and incident reporting into contracts and NDAs.
  • Set explainability standards. Use tools that provide feature importance, rationale, and case-level explanations suitable for regulators and consumers.
  • Provide consumer notices. Where appropriate, tell customers when AI assists in decisions and how to appeal or reach a human.
  • Control LLMs and automation. Prevent unauthorized data ingestion, set guardrails for prompts, and log AI-assisted adjustments and communications (a guardrail sketch follows the list as well).
  • Build governance and escalation. Stand up an AI risk committee, assign model owners, and create playbooks for outages, drift, or harmful outputs.
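
For teams starting the inventory item above, here is a minimal sketch in Python of what a machine-readable AI use-case catalog could look like. The field names and the example entry are our illustrative assumptions, not an OIR schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUseCase:
    """One row in the AI/ML inventory. All field names are illustrative."""
    name: str                 # e.g., "claims triage scorer"
    vendor: str               # "in-house" or vendor name
    model_type: str           # e.g., "gradient boosting", "LLM"
    data_sources: list[str]   # where the inputs come from
    decision_point: str       # where the output is actually used
    human_reviewer: str       # role with authority to override
    filed_with_oir: bool      # disclosed in a rate/rule/form filing?

# Hypothetical entry -- replace with your own systems.
inventory = [
    AIUseCase(
        name="claims triage scorer",
        vendor="in-house",
        model_type="gradient boosting",
        data_sources=["FNOL intake", "policy admin system"],
        decision_point="routes claims to adjuster queues",
        human_reviewer="claims supervisor",
        filed_with_oir=False,
    ),
]

# Emit a catalog your auditors (and regulators) can read.
print(json.dumps([asdict(u) for u in inventory], indent=2))
```

Even a simple catalog like this answers the first questions a regulator will ask: what the tool is, who supplied it, what data it touches, and who can override it.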
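For bias and fairness testing, one widely used screen is the disparate impact ratio: compare each group's favorable-outcome rate to the best-performing group's rate and flag ratios below the common four-fifths threshold. The sketch below assumes you already have decisions joined to a grouping attribute; the field names, sample data, and 0.8 cutoff are illustrative, and no single metric substitutes for actuarial and legal review.

```python
from collections import defaultdict

def disparate_impact_ratio(records, group_key, favorable_key):
    """Ratio of each group's favorable-outcome rate to the highest group's rate.

    records: iterable of dicts; group_key/favorable_key name fields in each dict.
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        favorable[g] += 1 if r[favorable_key] else 0
    rates = {g: favorable[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical data: claim approvals by territory band.
sample = [
    {"territory": "A", "approved": True},
    {"territory": "A", "approved": True},
    {"territory": "B", "approved": True},
    {"territory": "B", "approved": False},
]
ratios = disparate_impact_ratio(sample, "territory", "approved")

# A common screen flags ratios below 0.8 for further review.
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(ratios, flagged)
```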
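And for the LLM controls item, a thin wrapper that screens prompts against a blocklist and logs every call is a simple starting guardrail. `call_model` below is a stand-in for whatever client your vendor provides; the blocklist and field names are illustrative assumptions.

```python
import json, datetime

BLOCKED_TERMS = ["ssn", "social security"]  # illustrative; tune to your data policy

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM client call; replace with your vendor's SDK."""
    return "stub response"

def guarded_completion(prompt: str, user: str,
                       log_path: str = "llm_calls.jsonl") -> str:
    # Guardrail: refuse prompts that appear to ingest restricted data.
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        raise ValueError("prompt blocked: restricted data reference")
    response = call_model(prompt)
    # Log every AI-assisted communication for later audit.
    with open(log_path, "a") as f:
        f.write(json.dumps({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "prompt": prompt,
            "response": response,
        }) + "\n")
    return response
```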

What to expect from regulators

Focus will land on four areas: transparency in filings, human oversight, repeatable audits, and consumer protection. If your AI changes rates, claim outcomes, or consumer interactions, assume scrutiny.

Documentation will be make-or-break. Be ready to show what the system does, why it's appropriate, how it's monitored, and how a human can correct it.
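
One concrete way to meet that documentation bar is an append-only decision log that captures the model version, inputs, output, and any human override for every AI-assisted decision. Here is a minimal sketch under assumed field names; it is not a prescribed OIR format.

```python
import json, datetime

def log_decision(path, *, model_id, model_version, case_id,
                 inputs, ai_output, human_reviewer, human_override=None):
    """Append one AI-assisted decision as a JSON line (append-only audit trail)."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,   # ties back to versioned model docs
        "case_id": case_id,
        "inputs": inputs,
        "ai_output": ai_output,
        "human_reviewer": human_reviewer, # the qualified human on the hook
        "human_override": human_override, # None means the AI output stood
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage: an adjuster overrides a triage score.
log_decision(
    "decisions.jsonl",
    model_id="claims-triage",
    model_version="2.3.1",
    case_id="CLM-1042",
    inputs={"loss_type": "water", "reported_amount": 18500},
    ai_output={"priority": "low"},
    human_reviewer="adjuster:jdoe",
    human_override={"priority": "high", "reason": "active leak, habitability risk"},
)
```

A log like this answers all four regulator questions at once: what the system decided, which version decided it, who reviewed it, and how a human corrected it.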

Standards worth tracking

Two frameworks align with what Florida is signaling: the NAIC's AI principles for insurers and NIST's AI Risk Management Framework. Both emphasize accountability, transparency, and testing.

Bottom line for insurance leaders

Florida isn't banning AI. It's demanding control, clarity, and accountability. If your team can explain the system, audit it, and put a qualified human on the hook for outcomes, you're on the right track.

If you can't do those things today, pause the use case, strengthen governance, and restart with documented controls. It's much cheaper than a rejected filing, consumer harm, or an enforcement action.

Want your team fluent in responsible AI?

If you're building out training for underwriting, claims, or product teams, these curated programs can help you stand up governance and practical skills fast: Courses by job.

