Florida lawmakers probe AI in claims and pricing: what insurers need to do now
Florida lawmakers are scrutinizing how carriers apply AI in claims handling and pricing. Expect questions on fairness, transparency, and whether automated decisions create hidden bias or deny policyholders due process.
If you touch rating, underwriting, SIU, claims triage, or image-based estimating, this is your signal to tighten controls. The bar is moving from "innovation" to "explain your model, show your data, prove it's fair."
Where scrutiny will likely land
- Data sources: Third-party data, unverified proxies, and how you justify their predictive value.
- Model risk governance: Documentation, approvals, versioning, and who owns outcomes.
- Bias and disparate impact: Tests across protected classes and legitimate business-need defenses.
- Consumer communication: Clarity of notices, appeal paths, and human review options.
- Vendor oversight: Rights to audit, transparency on features, and incident reporting.
- Claims automation: Image estimating accuracy, leakage, and reinspection protocols.
- Pricing and rating: Feature explainability, consistency with rate filings, and alignment with actuarial standards.
Immediate actions for Florida carriers, MGAs, and TPAs
- Inventory AI systems: List every model used in rating, underwriting, claims, SIU, and CX. Include vendor tools.
- Assign ownership: Name a model owner, business sponsor, and compliance partner for each system.
- Create model cards: Purpose, inputs, training data, performance, limits, monitoring, and retraining triggers.
- Run fairness tests: Evaluate outcomes by protected characteristics or validated proxies. Document methods and thresholds.
- Set human-in-the-loop points: Define when people intervene, override, or escalate cases.
- Tighten audit trails: Log data inputs, version IDs, and decision rationales for every automated decision.
- Refresh consumer notices: Explain factors that materially influence outcomes and how to appeal for human review.
- Validate vendors: Require performance, bias, and security attestations; include right-to-audit clauses.
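The inventory and model-card steps above can be sketched as a structured record per system. This is a minimal illustration, not a mandated schema; every field name, value, and the "Acme Vision AI" vendor are hypothetical examples.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    """Illustrative model-card record; fields are examples, not a standard."""
    name: str
    purpose: str
    owner: str                 # named model owner
    business_sponsor: str
    compliance_partner: str
    inputs: list               # data sources / features
    training_data: str         # description and date range
    performance: dict          # headline metrics
    known_limits: list
    monitoring: str            # what is tracked and how often
    retraining_trigger: str    # condition that forces review
    vendor: str = "in-house"

card = ModelCard(
    name="auto-damage-estimator-v3",
    purpose="Image-based repair cost estimation for claims triage",
    owner="J. Smith (Claims Analytics)",
    business_sponsor="VP Claims",
    compliance_partner="Model Risk / Compliance",
    inputs=["claim photos", "vehicle year/make/model", "parts price feed"],
    training_data="2021-2023 closed claims, FL and GA, approx. 180k records",
    performance={"mae_usd": 412.0, "pct_within_10pct_of_appraisal": 0.83},
    known_limits=["hail damage underestimated", "low-light photos"],
    monitoring="Weekly drift report; monthly reinspection sample",
    retraining_trigger="Feature drift beyond threshold or MAE degradation > 15%",
    vendor="Acme Vision AI (hypothetical)",
)

# Serialize for the inventory and for audit export
inventory_entry = json.dumps(asdict(card), indent=2)
print(inventory_entry)
```

Keeping the record machine-readable means the same artifact feeds the system inventory, the one-page summary, and any future data call.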
Claims: practical controls that hold up under scrutiny
- Image-based estimating: Benchmark against human appraisals, flag low-confidence estimates, and re-inspect a sample.
- Severity and triage models: Monitor for underpayment patterns by region, vehicle type, or contractor network.
- Fraud scoring: Guard against redlining proxies; separate investigation triggers from payment decisions.
- Cycle-time vs. fairness: Track whether speed gains correlate with higher dispute rates or complaint volumes.
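The benchmarking and reinspection controls above can be made concrete with a simple sample check: compare automated estimates against paired human appraisals and route outliers or low-confidence estimates to reinspection. The 10% tolerance and 0.70 confidence floor are illustrative assumptions, not regulatory thresholds.

```python
def flag_for_reinspection(pairs, tolerance=0.10, min_confidence=0.70):
    """pairs: list of dicts with claim_id, model_estimate, appraisal, confidence.
    Returns (summary stats, claim IDs to reinspect)."""
    flagged = []
    abs_pct_errors = []
    for p in pairs:
        # Percent deviation of the model estimate from the human appraisal
        err = abs(p["model_estimate"] - p["appraisal"]) / p["appraisal"]
        abs_pct_errors.append(err)
        # Flag large deviations OR low model confidence for human review
        if err > tolerance or p["confidence"] < min_confidence:
            flagged.append(p["claim_id"])
    mape = sum(abs_pct_errors) / len(abs_pct_errors)
    return {"mape": mape, "n": len(pairs)}, flagged

# Hypothetical reinspection sample
sample = [
    {"claim_id": "FL-1001", "model_estimate": 2300.0, "appraisal": 2400.0, "confidence": 0.91},
    {"claim_id": "FL-1002", "model_estimate": 1500.0, "appraisal": 2100.0, "confidence": 0.88},
    {"claim_id": "FL-1003", "model_estimate": 3100.0, "appraisal": 3050.0, "confidence": 0.55},
]
stats, to_reinspect = flag_for_reinspection(sample)
print(stats, to_reinspect)  # FL-1002 (large error) and FL-1003 (low confidence) get flagged
```

Run against a fresh sample each month, the same routine doubles as the leakage and underpayment monitor, segmented by region or vehicle type.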
Pricing and rating: controls that regulators expect
- Feature governance: Map every rating factor to filed rules and actuarial support. Avoid opaque composites.
- Model explainability: Require local explanations for each quote; store them with the transaction.
- Retraining discipline: Pre-approve retraining data, shift detection thresholds, and rollback plans.
- Disparate impact testing: Test approval rates, premiums, and surcharges across protected groups (or validated proxies where direct data is unavailable).
- Adverse action clarity: If a factor affects price or eligibility, disclose it plainly and offer a path to correction.
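One common first-pass screen for the disparate impact testing above is the adverse impact ratio (the "four-fifths rule"): compare each group's favorable-outcome rate to the best-performing group's. The group labels, counts, and 0.8 cutoff are illustrative; this heuristic flags outcomes for deeper review and does not by itself establish or rule out disparate impact.

```python
def adverse_impact_ratios(approvals_by_group):
    """approvals_by_group: {group: (approved, total)}.
    Returns each group's approval rate relative to the best group's rate."""
    rates = {g: approved / total for g, (approved, total) in approvals_by_group.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical approval counts per group
outcomes = {
    "group_a": (820, 1000),
    "group_b": (640, 1000),
    "group_c": (790, 1000),
}
ratios = adverse_impact_ratios(outcomes)
# Flag any group below the four-fifths (0.8) screening threshold
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)
```

Document the method, the thresholds, and the follow-up analysis for each flag, since the defensible part is the process, not the single number.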
Documentation that makes hearings easier
- One-page model summary per system with purpose, benefits, risks, and controls.
- Policy library for AI use, data governance, access controls, and incident response.
- Testing pack with fairness, performance, stability, and backtesting results, refreshed quarterly.
- Issue register tracking model incidents, customer complaints, and corrective actions.
Vendor management: no black boxes
- Mandate transparency on key features and data lineage, even if weights remain proprietary.
- Require documented bias testing, monitoring SLAs, and immediate notification of material model changes.
- Secure export of decision explanations for filings, audits, and consumer disputes.
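Exporting decision explanations, whether from a vendor tool or an in-house model, is easier if each automated decision is stored as a self-describing record. The sketch below is a hypothetical schema; the field names, model version string, and factor contributions are illustrative, not a vendor API.

```python
import json
import datetime

def decision_record(quote_id, model_version, inputs, top_factors, outcome):
    """Build a per-decision record suitable for export to auditors or filings."""
    return {
        "quote_id": quote_id,
        "model_version": model_version,   # exact version that produced the decision
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "inputs": inputs,                 # data as the model saw it
        "top_factors": top_factors,       # local explanation: factor -> contribution
        "outcome": outcome,
    }

# Hypothetical rating decision
rec = decision_record(
    quote_id="Q-2024-0001",
    model_version="rating-model-2.4.1",
    inputs={"territory": "FL-33101", "prior_claims": 1},
    top_factors={"prior_claims": 0.18, "territory": 0.07},
    outcome={"premium": 1842.00, "eligibility": "approved"},
)
export = json.dumps(rec)  # attach to the transaction; hand to auditors on request
```

Requiring vendors to emit records in an agreed format like this makes the right-to-audit clause operational rather than theoretical.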
What to watch next
- Committee hearings and data calls focused on claims automation, price fairness, and vendor reliance.
- Potential filing requirements for AI-impacted rating factors and claims tools.
- Alignment with NAIC guidance and emerging state rules on unfair discrimination from algorithms.
Upskill your team
If your pricing, claims, or compliance teams need practical AI training, consider structured courses and certifications built for operators.
The takeaway: use AI, but make it auditable, explainable, and fair. If you can show your data, justify your factors, and prove ongoing oversight, you'll be ready for questions, and you'll operate with fewer surprises.