Florida Lawmakers Press Insurers on AI: Productivity Claims Meet Calls for Guardrails
TAMPA, Fla. - A Florida House subcommittee met with insurance executives to assess how artificial intelligence is being used across underwriting and claims. Executives argued AI is boosting productivity and accuracy. Lawmakers pushed back, questioning the risks of depending on automated systems without clear limits.
What Executives Say AI Is Doing Well
Industry representatives said they're moving past incremental tweaks and rethinking core processes with generative models. The pitch: faster analysis, better triage, and fewer manual errors.
One executive cited an expanded ability to evaluate weather patterns, satellite imagery, and other exposure data to inform writing and pricing decisions. Another emphasized discipline in deployment, assuring lawmakers that the company is not "turning everything over to Google" and coordinates closely with internal IT.
The Lawmaker Challenge: Can AI Deny Claims on Its Own?
During questioning, the subcommittee's vice chair from South Florida posed a direct question: what Florida statute prevents an insurer from using AI as the sole basis for denying a claim, whether in health, property, or another line?
Executives signaled caution and an intent to keep humans involved, but the exchange highlighted an unresolved issue: the need for explicit standards on model governance, explainability, and human review, especially for adverse decisions.
Practical Use Cases You Can Operationalize Now
- Underwriting: intake classification, risk scoring, document summarization, aerial/satellite-assisted property assessments.
- Claims: first notice intake, document extraction, medical bill review support, fraud flags, severity and leakage analytics, automated reserving suggestions.
- Cat management: event forecasting inputs, portfolio aggregation checks, automated exposure rollups to support reinsurance discussions.
- Customer ops: guided self-service, agent/adjuster copilots, knowledge retrieval from policy forms and endorsements.
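To make the claims use case concrete, here is a minimal sketch of an intake triage step that routes model-classified claims by confidence while reserving final authority for a human adjuster. All names and the threshold are hypothetical illustrations, not any carrier's actual system.

```python
from dataclasses import dataclass

@dataclass
class TriageResult:
    claim_id: str
    category: str      # e.g. "water", "wind", "fire"
    confidence: float  # model's self-reported confidence, 0..1
    route: str         # "auto_queue" or "human_review"

def triage(claim_id: str, category: str, confidence: float,
           review_threshold: float = 0.85) -> TriageResult:
    """Route a classified claim: anything below the confidence
    threshold always goes to a human adjuster."""
    route = "auto_queue" if confidence >= review_threshold else "human_review"
    return TriageResult(claim_id, category, confidence, route)

print(triage("CLM-001", "water", 0.92).route)  # auto_queue
print(triage("CLM-002", "wind", 0.60).route)   # human_review
```

The design point is that the model only recommends a queue; it never produces an adverse outcome on its own.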
Risk Controls Lawmakers Expect to See
- Human-in-the-loop for any adverse action (denials, cancellations, rescissions, price increases beyond threshold).
- Model inventory and lifecycle governance (purpose, owner, data sources, validation cadence, retirement criteria).
- Documented explainability: how a decision was made, what data influenced it, and the reviewer's sign-off.
- Bias and fairness testing with clear metrics, thresholds, and remediation steps.
- Third-party/vendor oversight: contractual rights to audit models, data lineage and retraining logs.
- Data controls: PII minimization, access logging, prompt/content filtering, red-teaming for prompt injection and data leakage.
- Consumer disclosures and appeal pathways that are simple and fast.
- Audit readiness: evidence trails for regulators and internal audit (features used, model versions, overrides).
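Several of these controls (human-in-the-loop for adverse actions, evidence trails, tamper detection) can be enforced at the point where a decision is recorded. The sketch below is a hypothetical illustration of such an audit record, not a reference to any real compliance system.

```python
import json
import hashlib
from datetime import datetime, timezone

def audit_record(decision_id, model_version, features, outcome, reviewer=None):
    """Build an audit-ready evidence record for a model-assisted decision.
    Adverse outcomes without a human reviewer sign-off are rejected up front."""
    if outcome in {"deny", "cancel", "rescind"} and reviewer is None:
        raise ValueError("adverse action requires human reviewer sign-off")
    record = {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features_used": sorted(features),
        "outcome": outcome,
        "reviewer": reviewer,
    }
    # A content hash lets auditors detect after-the-fact tampering.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

rec = audit_record("D-100", "uw-model-3.2", ["roof_age", "wind_zone"],
                   "deny", reviewer="adjuster_417")
print(rec["reviewer"])  # adjuster_417
```

Storing model version and features used with each decision is what makes regulator questions ("how was this denial made?") answerable months later.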
Regulatory Watchlist
Expect closer scrutiny of AI-based underwriting and claims decisions, especially where outcomes materially affect consumers. The NAIC's 2023 model bulletin on insurers' use of artificial intelligence systems, along with emerging state guidance, gives teams frameworks they can adapt today.
Action Plan for Insurance Leaders
- Stand up an AI governance committee spanning actuarial, claims, IT, legal, compliance, SIU, and product.
- Create decision tiers: which actions AI can recommend vs. decide, and where human validation is mandatory.
- Adopt standardized model risk documentation (purpose, datasets, performance, drift, limits, monitoring).
- Run bias testing by segment (ZIP, age band, protected classes as permitted) and store results with corrective actions.
- Pilot explainability tooling and require reason codes that are understandable to a consumer.
- Include AI clauses in vendor contracts: training data rights, security, incident reporting, and audit access.
- Train frontline staff and adjusters on AI-assisted workflows, escalation, and how to override with judgment.
- Implement kill-switches and fallback procedures if a model degrades or fails.
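The decision-tier and kill-switch items above can be sketched together: a tier map that never automates adverse actions, wrapped in a gate that falls back to human-only review if error rates breach a limit. This is a minimal illustration under assumed names and thresholds, not a production design.

```python
from enum import Enum

class Tier(Enum):
    RECOMMEND_ONLY = 1   # AI suggests; human decides
    AUTO_WITH_AUDIT = 2  # AI decides; sampled human audit
    HUMAN_ONLY = 3       # AI output not used

# Hypothetical tier map: adverse actions are never fully automated.
DECISION_TIERS = {
    "claim_approval_small": Tier.AUTO_WITH_AUDIT,
    "claim_denial": Tier.RECOMMEND_ONLY,
    "policy_rescission": Tier.HUMAN_ONLY,
}

class ModelGate:
    """Kill-switch wrapper: if the monitored error rate breaches the
    limit, every action falls back to human-only handling."""
    def __init__(self, error_limit: float = 0.05):
        self.error_limit = error_limit
        self.disabled = False

    def check(self, observed_error_rate: float) -> None:
        if observed_error_rate > self.error_limit:
            self.disabled = True  # trip the kill-switch

    def tier_for(self, action: str) -> Tier:
        if self.disabled:
            return Tier.HUMAN_ONLY  # fallback procedure
        # Unknown actions default to the safest tier.
        return DECISION_TIERS.get(action, Tier.HUMAN_ONLY)

gate = ModelGate()
print(gate.tier_for("claim_denial"))  # Tier.RECOMMEND_ONLY
gate.check(0.12)                      # breaches the 5% limit
print(gate.tier_for("claim_denial"))  # Tier.HUMAN_ONLY
```

Defaulting unknown actions to `HUMAN_ONLY` means a newly added workflow cannot be silently automated before governance has classified it.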
What This Means for Your Book
Productivity gains are real, but they won't protect you from regulatory risk or reputational damage if adverse decisions lack human review and clear explanations. The carriers that win will pair speed with accountability: precise data, transparent models, and documented oversight.
Build Team Capability
If you're upskilling underwriting, claims, or compliance teams on applied AI, structured training on model governance and AI-assisted workflows is worth the investment.
Bottom line: keep AI in the loop, just not in charge. Document everything, keep a human on the hook for adverse outcomes, and be ready to show your work.