Florida Signals Tighter Oversight of AI in Insurance
Tallahassee is paying attention. As lawmakers study artificial intelligence, Florida Insurance Commissioner Michael Yaworsky put it plainly: "Responsible AI governance is crucial." He added that he isn't against AI, but that it must be "responsibly deployed."
For insurance leaders, the message is simple: regulators want visibility, accountability, and proof that AI isn't creating unfair outcomes. If your teams use models for underwriting, rating, claims, special investigations (SIU), or customer service, now is the time to tighten controls.
What this means for carriers, MGAs, and agencies
- Expect exam questions on where and how AI is used across the lifecycle.
- Be prepared to show how you test models for accuracy, bias, and explainability.
- Vendor tools won't shield you. You'll still need governance and documentation on your side.
- Consumer protections matter: clear reasons for decisions, fast appeal paths, and human review for edge cases.
Practical steps you can implement now
- Create an AI inventory: every model, purpose, data sources, owners, and affected products/lines (see the first sketch after this list).
- Assign accountable owners: business, model risk, compliance, IT/security. Make decisions traceable.
- Document policies: model development, validation, monitoring, change control, and decommissioning.
- Test for fairness: segment outcomes by protected classes and relevant proxies; log methods and results (second sketch below).
- Strengthen data controls: data lineage, quality checks, consent tracking, and limits on external data.
- Set thresholds and alerts: drift, error rates, complaint spikes, and adverse action trends (third sketch below).
- Require human-in-the-loop for high-impact decisions (declines, rescissions, large claim denials).
- Tighten vendor oversight: ask for model cards, validation summaries, training data details, and SOC reports.
- Clarify consumer notices: specific reasons for decisions and an easy path to request human review.
- Build an exam-ready file: policies, inventories, validation reports, change logs, and issue remediation.
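For illustration, here is a minimal sketch of what one AI inventory record might capture. The field names and values are hypothetical placeholders, not a prescribed standard; map them to your own governance taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class ModelInventoryRecord:
    # Illustrative fields only; extend to match your governance taxonomy.
    model_id: str
    purpose: str                   # e.g., "claims triage", "auto rating"
    business_owner: str
    data_sources: list[str] = field(default_factory=list)
    affected_lines: list[str] = field(default_factory=list)
    high_impact: bool = False      # flags decisions needing human review

record = ModelInventoryRecord(
    model_id="uw-score-v3",
    purpose="personal auto underwriting score",
    business_owner="Underwriting",
    data_sources=["policy admin system", "third-party credit attributes"],
    affected_lines=["personal auto"],
    high_impact=True,
)
```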
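Next, a sketch of the fairness-testing idea: group decision outcomes by segment and compare each segment's rate against the best-performing one. The data, column names, and the 0.8 flag are illustrative assumptions only; your actuarial, legal, and compliance teams should set the actual methodology.

```python
import pandas as pd

# Hypothetical decision log: one row per underwriting decision.
decisions = pd.DataFrame({
    "segment": ["A", "A", "B", "B", "B", "A"],  # cohort or proxy-group label
    "approved": [1, 0, 1, 1, 0, 1],
})

# Approval rate per segment, then each rate relative to the
# highest-rate segment (an adverse-impact-ratio style comparison).
rates = decisions.groupby("segment")["approved"].mean()
ratios = rates / rates.max()

# Flag segments falling below a documented cutoff (0.8 shown here only
# as an example), and log the method and results for examiners.
flagged = ratios[ratios < 0.8]
print(rates, ratios, flagged, sep="\n")
```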
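Finally, a sketch of one common drift check, the population stability index (PSI), comparing recent model scores to a baseline distribution. The synthetic data here is made up for illustration, and while 0.2 is a widely used PSI rule of thumb, you should set and document your own alert thresholds.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Simple PSI between a baseline score distribution and a recent one."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

baseline = np.random.default_rng(0).normal(0.50, 0.10, 5000)  # training-time scores
recent = np.random.default_rng(1).normal(0.55, 0.12, 5000)    # last month's scores

psi = population_stability_index(baseline, recent)
if psi > 0.2:  # example threshold; tune and document your own
    print(f"ALERT: score drift detected (PSI={psi:.3f})")
```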
What Florida regulators may ask you to prove
- Where AI is used and for what decision types.
- How you validate models before and after deployment.
- How you detect and fix bias or drift.
- How consumers are informed and can appeal decisions.
- How you oversee third-party models and data.
Why this trend isn't isolated
Regulatory focus on AI is growing across the industry. Principles and frameworks already exist to guide you. If you align with them now, you'll be ahead of state-level expectations and future exams.
- NIST AI Risk Management Framework - practical guidance on governance, risk, and controls.
- NAIC Principles on AI - fairness, accountability, compliance, and transparency expectations.
If your teams need structured upskilling
Get your underwriting, claims, and compliance leaders on the same page with focused training and tools. Start with role-based options here: AI courses by job.
The bottom line: Florida wants responsible deployment, not a freeze on innovation. Build governance that can stand up to questions, and you'll protect consumers, reduce model risk, and keep your AI programs moving forward.