Florida lawmakers scrutinize AI in insurance decisions
Florida's House Subcommittee on Banking and Insurance called in industry executives on Oct. 7 to explain how artificial intelligence is being used across underwriting, pricing, claims, customer service, and risk management. The core question: how far automation should go, and where human judgment must remain in control.
Industry representatives said AI is improving speed and accuracy across functions. Lawmakers pressed on consumer impact. One question cut to the heart of the matter: what law prevents an insurer from using AI as the sole basis for denying a claim? Executives stressed human oversight and judgment, noting that they are not outsourcing decisions to a generic web tool.
What this means for insurers operating in Florida
- Expect deeper scrutiny of models that influence premiums, underwriting decisions, claim triage, SIU referrals, nonrenewals, and pricing segmentation.
- Be ready to demonstrate human-in-the-loop controls for adverse actions and claims denials, plus a clear appeal path.
- Maintain documentation: data lineage, feature governance, thresholds, model versions, validation results, and monitoring plans.
- Test for unfair bias and proxy effects; document methods, thresholds, remediation steps, and re-tests (a minimal metric sketch follows this list).
- Create explanation artifacts to support consumer notices and regulator inquiries (plain-language reason codes, input factors, and how they influenced outcomes).
- Strengthen third-party risk management: data and model vendor due diligence, performance SLAs, and audit rights.
- Keep decision and access logs with retention aligned to claims, underwriting, and market conduct timelines (see the log-record sketch after this list).
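To make the bias-testing item concrete, here is a minimal sketch of an adverse impact ratio check. The group labels, the field layout, and the 0.8 threshold (borrowed from the familiar "four-fifths" rule of thumb) are assumptions for illustration only; actual fairness methods, statistical tests, and thresholds should come from your actuarial and fairness teams and be documented alongside remediation steps.

```python
from collections import defaultdict

# Illustrative threshold: the "four-fifths" rule of thumb. Real programs
# should set, justify, and document their own thresholds and tests.
AIR_THRESHOLD = 0.8

def adverse_impact_ratio(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs.

    Returns (ratio, rates_by_group). The ratio compares the lowest group
    approval rate to the highest; values below the threshold warrant
    investigation, documented remediation, and a re-test.
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return ratio, rates

if __name__ == "__main__":
    # Hypothetical claim-approval outcomes by territory band (a possible
    # proxy variable); real tests would run on governed, documented data.
    sample = ([("band_A", True)] * 90 + [("band_A", False)] * 10
              + [("band_B", True)] * 70 + [("band_B", False)] * 30)
    ratio, rates = adverse_impact_ratio(sample)
    print(f"rates={rates} ratio={ratio:.2f} flagged={ratio < AIR_THRESHOLD}")
```

In this hypothetical sample the ratio falls just below 0.8, which is exactly the kind of result that should trigger a documented review rather than an automatic conclusion either way.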
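For the decision- and access-log item, a minimal sketch of what one auditable log record might carry. The field names, model identifiers, and retention date are hypothetical; actual retention periods should be set by counsel against Florida claims, underwriting, and market conduct timelines.

```python
from __future__ import annotations

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionLogEntry:
    """One auditable record per model-influenced decision (illustrative fields)."""
    decision_id: str
    policy_or_claim_id: str
    model_name: str
    model_version: str
    decision: str                 # e.g., "approve", "refer_to_adjuster", "deny"
    reason_codes: list[str]       # plain-language codes backing consumer notices
    human_reviewer: str | None    # None only for non-adverse, low-impact outcomes
    retention_until: str          # set from the applicable records schedule
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical entry for a claim routed to an adjuster by a triage model.
entry = DecisionLogEntry(
    decision_id="D-000123",
    policy_or_claim_id="CLM-2025-0456",
    model_name="claim_triage",          # assumed model name for illustration
    model_version="2.3.1",
    decision="refer_to_adjuster",
    reason_codes=["WATER_DAMAGE_PATTERN", "HIGH_SEVERITY_ESTIMATE"],
    human_reviewer="adjuster_8821",
    retention_until="2032-12-31",
)
print(json.dumps(asdict(entry), indent=2))
```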
Signals from the hearing: likely guardrails
- No denials or adverse actions based solely on automated outputs without human review and a documented appeal option (a gating sketch follows this list).
- Disclosure when automated tools influence a decision, including a meaningful explanation on request.
- Limits on inputs that serve as proxies for protected traits; enhanced market conduct exams focused on AI use.
- Possible attestations or inventories of AI systems submitted to the Office of Insurance Regulation.
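To make the first guardrail concrete, here is a minimal sketch of a gate that prevents adverse actions from being finalized on model output alone. The function name, decision labels, and review fields are assumptions for illustration, not a prescribed workflow; your own procedures should define who qualifies as a reviewer and how appeals are documented.

```python
ADVERSE_ACTIONS = {"deny", "nonrenew", "rescind", "surcharge"}

def finalize_decision(model_recommendation: str,
                      human_review: dict | None) -> str:
    """Return a final decision only when adverse outcomes carry human sign-off.

    model_recommendation: label produced by the automated tool.
    human_review: e.g. {"reviewer": "adjuster_8821", "concurs": True,
                        "appeal_notice_sent": True}, or None if no review yet.
    """
    if model_recommendation not in ADVERSE_ACTIONS:
        return model_recommendation      # non-adverse outcomes may proceed

    if human_review is None:
        return "pending_human_review"    # never finalize on automation alone

    if not human_review.get("concurs"):
        return "escalate"                # reviewer disagrees: route to a senior decision-maker

    if not human_review.get("appeal_notice_sent"):
        return "pending_appeal_notice"   # document the appeal path before finalizing

    return model_recommendation

# Example: an automated denial cannot be finalized until a qualified reviewer
# concurs and the appeal option is documented.
print(finalize_decision("deny", None))  # -> pending_human_review
print(finalize_decision("deny", {"reviewer": "adj_1", "concurs": True,
                                 "appeal_notice_sent": True}))  # -> deny
```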
Action checklist for Q4-Q1
- Inventory every AI/ML system across the policy lifecycle; rate each by consumer impact and model risk (see the inventory sketch after this checklist).
- Set up cross-functional AI governance (actuarial, underwriting, claims, compliance, legal, IT, SIU) with a clear RACI (responsible, accountable, consulted, informed) matrix.
- Run a gap assessment against the NIST AI Risk Management Framework; prioritize controls for high-impact use cases.
- Define human override and escalation procedures; train adjusters and underwriters on when and how to intervene.
- Update adverse action and claim denial templates with specific, understandable reason statements tied to inputs (see the reason-statement sketch after this checklist).
- Exercise surge scenarios (e.g., catastrophe events) with model fallback plans and manual runbooks (see the fallback sketch after this checklist).
- Brief the board; include AI risk in ORSA and ERM artifacts; align with Florida OIR expectations.
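For the inventory item in the checklist above, a minimal sketch of how systems might be rated by consumer impact and model risk. The scoring weights and tier cutoffs are assumptions for illustration; real criteria belong in your model risk management policy.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    lifecycle_stage: str        # underwriting, pricing, claims, service, SIU
    consumer_impact: int        # 1 (low) .. 5 (drives adverse actions)
    model_complexity: int       # 1 (rules/GLM) .. 5 (opaque ensemble/LLM)
    third_party: bool           # vendor-supplied model or data

def risk_tier(system: AISystem) -> str:
    """Illustrative tiering: high consumer impact dominates other factors."""
    score = system.consumer_impact * 2 + system.model_complexity
    if system.third_party:
        score += 1
    if system.consumer_impact >= 4 or score >= 10:
        return "Tier 1 - full governance, human-in-the-loop, annual validation"
    if score >= 6:
        return "Tier 2 - standard monitoring and periodic validation"
    return "Tier 3 - lightweight controls"

# Hypothetical inventory entries; names and scores are illustrative.
inventory = [
    AISystem("claim_triage", "claims", consumer_impact=5, model_complexity=4, third_party=False),
    AISystem("chat_assistant", "service", consumer_impact=2, model_complexity=3, third_party=True),
]
for s in inventory:
    print(f"{s.name}: {risk_tier(s)}")
```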
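For the notice-template item, a minimal sketch that maps model reason codes to specific, plain-language reason statements tied to inputs. The reason codes and wording are hypothetical and would need review by compliance and counsel before appearing in any consumer notice.

```python
# Hypothetical mapping from model reason codes to approved consumer-facing language.
REASON_LIBRARY = {
    "ROOF_AGE": "The roof age reported for the property ({value} years) exceeds our underwriting guideline.",
    "PRIOR_WATER_CLAIMS": "The property has {value} prior water damage claims within the past five years.",
    "INSPECTION_FINDINGS": "The inspection report identified unrepaired damage: {value}.",
}

def build_reason_statements(model_reasons: list) -> list:
    """Translate (reason_code, supporting_value) pairs into notice-ready sentences."""
    statements = []
    for code, value in model_reasons:
        template = REASON_LIBRARY.get(code)
        if template is None:
            # Unknown codes must be resolved by a human before the notice goes out.
            raise ValueError(f"No approved consumer language for reason code {code!r}")
        statements.append(template.format(value=value))
    return statements

print(build_reason_statements([("ROOF_AGE", 22), ("PRIOR_WATER_CLAIMS", 3)]))
```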
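And for the surge-scenario item, a minimal sketch of a fallback switch that routes work to the manual runbook when a model is degraded or overwhelmed. The health signals and thresholds are assumptions for illustration; real values belong in each model's monitoring plan and catastrophe procedures.

```python
from dataclasses import dataclass

@dataclass
class ModelHealth:
    error_rate: float      # share of scoring requests failing or timing out
    drift_score: float     # 0 = stable inputs, 1 = severe drift vs. training data
    queue_depth: int       # claims awaiting automated triage

# Illustrative thresholds only.
MAX_ERROR_RATE = 0.05
MAX_DRIFT = 0.30
MAX_QUEUE = 10_000

def route_claim(health: ModelHealth) -> str:
    """Decide whether new claims go to the model or the manual runbook."""
    if (health.error_rate > MAX_ERROR_RATE
            or health.drift_score > MAX_DRIFT
            or health.queue_depth > MAX_QUEUE):
        return "manual_runbook"   # adjusters follow the documented CAT procedure
    return "automated_triage"

# Example: a post-hurricane surge with drifting inputs falls back to manual handling.
print(route_claim(ModelHealth(error_rate=0.02, drift_score=0.45, queue_depth=25_000)))
```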
What to watch next
These committee sessions were fact-finding. Expect draft bills that target transparency, human review, vendor oversight, and recordkeeping. Carriers and MGAs should prepare to show their work: how models are governed, how fairness is tested, and how customers can get a human review.
Bottom line: treat AI like any other high-impact model, meaning documented, monitored, explainable, and interruptible by a qualified human. If you can show that, you'll be ready for both market conduct exams and new statutory requirements.
If your teams need practical upskilling on oversight and workflow automation, explore role-based AI training options.