Florida to insurers: use AI responsibly, keep humans in the loop

Florida's insurance chief says insurers may use AI, but must prove it's explainable, disclosed, and human-overseen. Bills would require people, not algorithms, to make denial decisions and keep regulators informed.

Categorized in: AI News, Insurance
Published on: Nov 27, 2025

Florida's top insurance regulator is pushing for clear guardrails on artificial intelligence across underwriting, rating, and claims. The message: deploy AI, but ensure disclosure, auditability, and a qualified human in the loop.

"Responsible AI governance is crucial," Insurance Commissioner Michael Yaworsky told senators. He emphasized that regulators aren't banning AI, but they do want visibility into how it's used and who is accountable for outcomes.

What lawmakers are considering

Two companion bills seek to keep humans in charge of adverse decisions. Rep. Hillary Cassel filed HB 527, mirroring SB 202 by Sen. Jennifer Bradley, to ensure that a human, not an AI system, makes the final decision on insurance claim denials.

House leadership also scheduled "Artificial Intelligence Week" for Dec. 8-12, signaling more scrutiny across sectors, including insurance. The tone is opportunity with caution: innovation is welcome, but blind spots will get attention.

Regulator concerns you should treat as action items

Yaworsky cited a recent filing where a carrier used an off-the-shelf AI and couldn't explain how it worked. That's a red flag regulators won't ignore.

If you use vendors or third-party models, expect questions about explainability, oversight, and documentation. "If a practice is prohibited for a human to do on behalf of an insurance company, it is prohibited for AI to do," industry leaders told lawmakers. No end runs around existing law.

Immediate steps for carriers and MGAs

  • Inventory your AI systems: Map every place AI touches pricing, underwriting, claims triage and settlement, SIU, marketing, and customer service.
  • Disclose usage: Prepare consumer-facing and regulator-facing disclosures that are clear, specific, and consistent with filings.
  • Prove human oversight: Identify accountable owners. Document who can override the system and when.
  • Explainability on demand: Maintain model summaries ("model cards"), variable lists, limitations, and known failure modes.
  • Bias and fairness testing: Run pre-deployment and ongoing tests for unfair discrimination; document methodology and remediation plans.
  • Data governance: Validate data sources, consent, accuracy, lineage, and retention. Lock down PII and health data.
  • Vendor risk management: Contract for audit rights, change-management notifications, incident reporting, and compliance attestations.
  • Audit trails: Log inputs, decisions, overrides, and outcomes to support market conduct exams and consumer complaints.
  • Claims safeguards: If AI recommends denial or reduction, require human review and clear appeal channels.
  • Filing readiness: For any rating or underwriting impact, ensure your rate/rule filings reflect variables, logic, and controls.
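The audit-trail item above can be sketched as a structured log record that captures inputs, the AI recommendation, the human reviewer, and the outcome. This is a minimal illustration, not a regulatory schema; every field name here is an assumption.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    """Illustrative audit-trail entry for one AI-assisted decision."""
    system_id: str        # entry in the carrier's AI registry (hypothetical id)
    model_version: str    # supports change-control and vendor-version tracking
    inputs_summary: dict  # key variables the model saw (no raw PII)
    recommendation: str   # e.g. "approve", "refer", "deny"
    human_reviewer: str   # accountable person who confirmed or overrode
    overridden: bool      # whether the human changed the AI recommendation
    final_outcome: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example record as it might be written at decision time
record = AIDecisionRecord(
    system_id="claims-triage-01",
    model_version="2.3.1",
    inputs_summary={"claim_type": "water", "severity_score": 0.82},
    recommendation="refer",
    human_reviewer="adjuster_417",
    overridden=False,
    final_outcome="referred_to_siu",
)
print(json.dumps(asdict(record), indent=2))
```

Logging decisions in a structure like this is one way to answer market-conduct exam questions about who decided what, and when, without reconstructing events after the fact.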

Operational focus by function

  • Underwriting/Rating: Validate variables against filed/approved rules. Monitor drift. Run holdout testing before pushing to production.
  • Claims: Use AI for triage, FNOL extraction, and fraud signals, but keep humans on adverse decisions. Track false positives.
  • Customer communications: If generative tools draft letters or emails, require templates, human review, and compliance checks.
  • SIU/Fraud: Treat models as leads, not verdicts. Document thresholds and investigation procedures.
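The "keep humans on adverse decisions" rule above can be expressed as a simple gate: the AI may recommend a denial or reduction, but the system refuses to finalize one without a human's decision. This is a sketch under assumed category names ("deny", "reduce"), not a description of any carrier's actual workflow.

```python
from typing import Optional

# Adverse actions that require human sign-off (illustrative categories)
ADVERSE = {"deny", "reduce"}

def finalize_decision(ai_recommendation: str,
                      human_decision: Optional[str]) -> str:
    """Return the final claim decision, requiring a human decision
    whenever the AI recommends an adverse action."""
    if ai_recommendation in ADVERSE:
        if human_decision is None:
            raise ValueError(
                "Adverse AI recommendation requires a human decision"
            )
        return human_decision  # human may confirm or override the AI
    return ai_recommendation   # non-adverse outcomes may pass through

print(finalize_decision("approve", None))    # -> approve
print(finalize_decision("deny", "approve"))  # human overrides -> approve
```

The point of the gate is structural: an adverse outcome literally cannot be emitted without a recorded human input, which is the property the pending bills would require.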

Governance that will stand up in Tallahassee

  • Policy and register: Adopt an AI policy and maintain a registry of all AI systems with risk ratings.
  • RACI clarity: Assign accountable executives, model owners, and validators. Schedule periodic reviews.
  • Change control: Require pre-deployment review for model updates and vendor version changes.
  • Consumer impact checks: Pre-release testing for disparate impact and clear adverse action language.
  • Incident response: Define triggers for regulator notification if AI causes material errors or consumer harm.
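The "policy and register" item above implies a queryable inventory of AI systems with risk ratings and accountable owners. A minimal sketch follows; the systems, fields, and risk tiers are all illustrative assumptions.

```python
# Illustrative AI-system registry supporting risk-based review scheduling.
registry = [
    {"system_id": "rating-gbm-01", "function": "rating",
     "vendor": "in-house", "risk": "high", "owner": "chief_actuary"},
    {"system_id": "fnol-extractor", "function": "claims",
     "vendor": "third-party", "risk": "medium", "owner": "claims_vp"},
]

def systems_needing_review(registry, min_risk="high"):
    """List system ids at or above a risk tier, e.g. for periodic review."""
    order = {"low": 0, "medium": 1, "high": 2}
    return [s["system_id"] for s in registry
            if order[s["risk"]] >= order[min_risk]]

print(systems_needing_review(registry))  # -> ['rating-gbm-01']
```

Even a flat list like this answers the regulator's first questions: what AI do you run, who owns it, and how risky is it.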

What to watch next

Expect movement on disclosure and human-in-the-loop requirements for claims. Committees may also push for explicit documentation standards and regulator access to model information.

Practical takeaway: treat AI like any other regulated process: file it when it affects rating or underwriting, document it when it touches consumers, and don't deploy anything you can't explain.
