Florida House panel backs human sign-off on AI-influenced claim denials
Florida lawmakers took a clear position on AI in claims: humans must make the final call when a denial or payment reduction is on the table. The House Insurance & Banking Subcommittee unanimously advanced HB 527, sending a signal to carriers that "human in the loop" won't be optional if this becomes law.
Insurer trade groups pushed back, warning about slower cycle times and duplicative regulation. Hospitals and physicians backed the bill, reflecting long-running friction over claim adjudication and payment integrity tools.
What HB 527 would require
- Human decision-maker: Any claim denial, partial denial, or payment reduction must be decided by a qualified human professional.
- AI still allowed: Carriers can use AI and algorithms for intake, triage, and recommendations. The human owns the final decision.
- Regulatory transparency: If AI or algorithms are used in claims handling, carriers must document details in manuals available to insurance regulators.
Regulatory backdrop
The proposal landed during the Florida House's "AI Week," with committees reviewing technology issues across sectors. Bill sponsor Rep. Hillary Cassel said no Floridian should have a claim denied solely by an automated output, calling the measure a "clear and reasonable safeguard."
Florida's Insurance Commissioner Michael Yaworsky told senators he supports clearer oversight, including disclosure, audits, and proof that a human with expertise understands the system, without eliminating AI outright. "This is a policy decision for the Legislature," he said, emphasizing responsible, regulator-visible use of AI.
There was also discussion of federal activity around AI policy. Cassel noted that insurance is regulated at the state level under long-standing federal law, and said a federal executive action would not override Florida's authority over insurance regulation.
Why this matters for insurers
- Cycle time vs. control: Human sign-off adds friction at the denial stage. Expect new SLAs, staffing models, and escalation thresholds.
- Documentation burden: Manuals must now describe where and how AI is used in claims handling. Assume regulator reviews and potential audits.
- Appeals and litigation: Clear human rationale will carry more weight than model scores. Train reviewers to write defensible, plain-language explanations.
- Vendor management: Contracts with claim tech and payment integrity vendors need audit rights, model explainability, and change-control terms.
- Cat response: Surge events will stress human review capacity. Pre-build playbooks and staffing pools to avoid backlogs.
Operational impacts by function
- Claims ops: Add human decision gates at any denial/reduction step. Configure systems to block straight-through denials.
- SIU/fraud: Keep ML scoring for triage, but require human confirmation before adverse actions. Log the reviewer and rationale.
- Health claims: Payment edits and clinical algorithms can recommend reductions, but a qualified reviewer must confirm medical necessity and policy terms.
- P&C claims: For property damage (e.g., storm losses), AI-driven estimate adjustments need human approval before issuing partial payments.
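To make the "human decision gate" concrete, here is a minimal sketch of the pattern described above. All names (`Recommendation`, `HumanDecision`, `finalize`) are hypothetical illustrations, not language from HB 527 or any carrier system; the point is simply that a model may recommend an adverse action, but only a human reviewer can finalize it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    claim_id: str
    action: str        # "approve", "deny", or "reduce"
    model_score: float

@dataclass
class HumanDecision:
    reviewer: str
    action: str
    rationale: str

def finalize(rec: Recommendation, human: Optional[HumanDecision]) -> str:
    """Return the decision of record for a claim."""
    adverse = rec.action in {"deny", "reduce"}
    if adverse and human is None:
        # Block straight-through processing of denials and reductions.
        return "queued_for_human_review"
    if adverse:
        # The human's decision, not the model score, is the outcome of record.
        return human.action
    return rec.action

# An AI-recommended denial with no reviewer is held, not executed.
print(finalize(Recommendation("CLM-1", "deny", 0.91), None))
# → queued_for_human_review

# With a reviewer's sign-off, the human's action is the final decision.
print(finalize(Recommendation("CLM-1", "deny", 0.91),
               HumanDecision("adjuster_a", "deny",
                             "Excluded peril under policy terms")))
# → deny
```

Note the design choice: the gate returns a queue state rather than executing the model's recommendation, so straight-through adverse actions are impossible by construction.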
Compliance checklist to start now
- Inventory every AI/algorithm touching claims intake, routing, adjudication, and payment integrity.
- Define "qualified human professional" for each claim type, including credentials and training.
- Insert final human decision steps for denials and reductions; capture name, timestamp, and rationale.
- Revise claim manuals to describe AI usage, data inputs, model purpose, human oversight, and change control.
- Update denial and EOB templates to state that a human made the ultimate decision and why.
- Stand up an audit trail: model versioning, overrides, sampling reviews, and outcome monitoring.
- Run fairness and accuracy testing; document thresholds and remediation plans.
- Amend vendor agreements for explainability, audit rights, data governance, and incident reporting.
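The audit-trail item above can be sketched as a single structured record per adverse decision. This is an illustrative assumption, not a prescribed format: field names like `model_version` and `human_overrode_model` are hypothetical, chosen to cover the checklist's elements (reviewer name, timestamp, rationale, model versioning, overrides).

```python
import json
from datetime import datetime, timezone

def audit_entry(claim_id: str, reviewer: str, action: str,
                rationale: str, model_version: str,
                overridden: bool) -> str:
    """Serialize one audit-trail entry for an adverse claim decision."""
    record = {
        "claim_id": claim_id,
        "reviewer": reviewer,            # the human decision-maker of record
        "action": action,                # deny / reduce / approve
        "rationale": rationale,          # plain-language, policy-based reason
        "model_version": model_version,  # supports change control and audits
        "human_overrode_model": overridden,
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, sort_keys=True)

entry = audit_entry("CLM-2", "adjuster_b", "approve",
                    "Damage consistent with covered peril",
                    "triage-v1.4", True)
print(entry)
```

Writing these entries append-only, with the model version alongside the human rationale, gives regulators and appeal reviewers one place to see who decided, when, why, and what the model advised.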
Open questions to watch
- How will "qualified human professional" be defined across lines of business?
- What counts as a "portion of a claim" in complex, multi-line, or bundled claims?
- Implementation timing and enforcement mechanisms: grace periods, attestations, or rulemaking?
- How will regulators evaluate manuals, and what will trigger deeper audits?
Context: claims pressure is real
Florida's storm losses and shoreline erosion have put sustained pressure on claims accuracy and speed. That pressure has accelerated adoption of triage models, estimate assistants, and payment integrity algorithms. HB 527 doesn't ban these tools; it forces human accountability at the point where policyholders feel the outcome most.
What to do this week
- Stand up a cross-functional tiger team (claims, compliance, legal, IT, vendor management).
- Freeze any new auto-denial logic until a human review step is in place.
- Draft the regulator-facing AI section of your claims manual; keep it factual, specific, and current.
- Pilot a human-review queue on top 10 denial/reduction scenarios; measure impact on cycle time and accuracy.
- Train reviewers to write short, evidence-based rationales aligned to policy language.
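For the pilot step above, even a spreadsheet-simple comparison answers the key question: how much cycle time does the human gate add? The sketch below uses invented sample numbers (hours per claim) purely for illustration; real pilots would pull these from claim system timestamps.

```python
import statistics

# Hypothetical pilot sample: cycle times in hours for the same
# denial/reduction scenarios with and without the human-review step.
auto_only = [4, 6, 5, 7, 4]        # straight-through processing
with_review = [10, 14, 9, 12, 11]  # human gate added

added = statistics.mean(with_review) - statistics.mean(auto_only)
print(f"Median with review: {statistics.median(with_review)} h")
print(f"Mean added cycle time: {added:.1f} h per claim")
```

Pairing this with an accuracy measure (e.g., overturn rate on appeal) shows whether the added hours buy better decisions, which is the trade-off regulators and plaintiffs will probe.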
Follow the policy process
The bill cleared its first House stop with unanimous support and moves on for further consideration. Track official updates and prepare for fast implementation if it passes.
Upskill your team on responsible AI
If you're formalizing AI governance in claims, targeted role-based training helps speed rollout and reduce rework.