Florida's AI-In-Insurance Hearing: What Carriers and Claims Leaders Need to Do Now
Florida lawmakers pressed insurance and tech officials on how artificial intelligence is used in claims handling and whether it can be misused. Industry representatives said AI is a tool under existing insurance laws and isn't a shortcut around consumer protections.
"If a practice is prohibited for a human to do on behalf of an insurance company, it is prohibited for AI to do," said Paul Martin of the National Association of Mutual Insurance Companies. Lawmakers explored whether AI can deny claims without human involvement, and what Florida law, if any, prevents that in property or health insurance.
Key points from the hearing
- Existing statutes still govern AI use. Martin emphasized AI is not an end run around state statutes or regulations.
- Rep. Hillary Cassel asked what Florida law explicitly bars AI from being the sole basis of a claim denial. That gap was a focal point.
- Thomas Koval (FCCI Insurance Group/Florida Insurance Council) said AI has been used for years but is more advanced now, and won't be used as a blanket decision-maker. "It's just not like we're going to turn everything onto Google and ask, 'Should we pay this claim?'"
- In 2025, bills to prevent AI-only denials failed, but the issue is likely back in the 2026 session. Committee leadership called it an ongoing learning process.
- Lawmakers are also reviewing AI in education. TechNet urged lawmakers to only create new AI-specific authorities where there is a clear, unique risk.
- Property insurance pressures were front and center. Rep. Marie Woodson cited high premiums; Koval said AI-driven efficiency can lower operating costs, which feed into rates.
- Rep. Nathan Boyles warned that AI-driven micro-segmentation could shut some risks out of the market. Martin countered: "AI is not there to deny claims. AI is not there to write fewer policies."
Operational implications for insurers
Florida may not have codified AI-specific claims limits yet, but the direction is clear: human accountability, explainability, and auditability. If you rely on AI for claims triage, fraud flags, or document extraction, shore up controls now.
- Require a human-in-the-loop for any adverse decision. No final denials, rescissions, or coverage determinations should be fully automated.
- Document decision logic. Keep model inventories, inputs, versions, and claim-level rationales in the file. Preserve audit trails.
- Test for unfair discrimination. Run pre-deployment and periodic impact testing. Remove prohibited variables and watch for proxies.
- Tighten vendor oversight. Get attestations on data sources, performance, bias testing, and explainability. Reserve audit and termination rights.
- Define use boundaries. AI can triage severity, flag potential fraud, and accelerate document handling. Humans should handle coverage, liability, and settlement authority.
- Upgrade consumer notices. Explain how AI assists your process, provide clear appeal paths, and capture human reviews in writing.
- Align with filings and compliance. If models affect rating or underwriting in Florida, be prepared to describe them to regulators.
- Train your people. Claims, SIU, compliance, and product teams need shared language and guardrails for AI-supported workflows.
Regulatory signals to watch
- 2026 Florida bills that could bar AI-only denials across lines of business.
- Guidance tying AI-supported activities to unfair claims settlement and unfair discrimination standards.
- The NAIC's AI governance direction, which many states are referencing for expectations around testing and oversight.
- Colorado's rules on algorithmic discrimination and model governance, a common reference point for state action.
Action checklist for the next 90 days
- Map where AI touches decisions in claims, underwriting, pricing, marketing, and fraud.
- Stand up an AI governance group with compliance, legal, actuarial, claims, and IT.
- Adopt written policies for human review, explanations, recordkeeping, and model change control.
- Run bias and performance tests on models that influence consumer outcomes, and remediate gaps.
- Update consumer disclosures and adverse action processes to reflect human review.
- Evaluate vendor contracts and require transparency on model behavior and data lineage.
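The bias-and-performance-testing step above can be sketched as a periodic check on outcome rates by group. The example below computes each group's approval rate relative to the best-performing group; the 0.8 threshold mirrors the common four-fifths rule of thumb, and the group labels and data shape are assumptions for illustration, not a prescribed testing methodology.

```python
from collections import Counter

def approval_rate_ratios(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs.
    Returns each group's approval rate divided by the highest group's
    approval rate. Under the common four-fifths rule of thumb, ratios
    below 0.8 warrant a closer look for proxy discrimination."""
    approved, total = Counter(), Counter()
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    if best == 0:  # nothing approved anywhere; ratios are undefined
        return {g: 0.0 for g in rates}
    return {g: round(r / best, 3) for g, r in rates.items()}
```

A disparity flagged this way is a signal to investigate, not proof of unfair discrimination; remediation still requires actuarial and legal review of the variables driving the gap.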
If you need to level up team skills
If your claims, SIU, or compliance teams need practical AI fluency, a focused, role-based curriculum helps.
Bottom line: Florida is signaling more scrutiny on AI in claims. Build controls now so you can defend decisions, protect consumers, and keep speed gains without regulatory risk.