Florida moves to put humans, not algorithms, in charge of insurance claim denials

HB 527 would require a human to approve any AI-based claim denial or payment cut. It cleared a House panel unanimously, with provider backing and insurer pushback.

Categorized in: AI News, Insurance
Published on: Dec 11, 2025

Florida bill would require human review before insurance claims are denied when AI is involved

Florida lawmakers advanced a bill that would require a qualified human to make the final call on any claim denial or payment reduction when artificial intelligence or algorithms are involved.

The House Insurance & Banking Subcommittee unanimously approved HB 527, sponsored by Rep. Hillary Cassel, R-Dania Beach. The measure drew support from hospitals and doctors, and pushback from major insurance trade groups.

What HB 527 does

The bill allows carriers to use AI and algorithms to process and recommend claim outcomes. However, it draws a hard line on adverse decisions: any denial, partial denial, or payment reduction must be decided by a qualified human professional.

HB 527 also requires carriers that use AI or algorithms in claims handling to include specific details in their claims manuals, which are subject to regulatory review. That means more transparency into models, rules, and workflows that influence claim outcomes.

"No Floridian should ever have a claim denied based solely on an automated output," Cassel said. "HB 527 establishes a clear and reasonable safeguard."

Regulatory context

State Insurance Commissioner Michael Yaworsky told senators he wants changes that ensure proper oversight of AI, including disclosure, auditability, and a "human in the loop" with clear expertise. He stopped short of endorsing a requirement like HB 527's human decision mandate.

The discussion is unfolding during a House-declared "Artificial Intelligence Week," with multiple panels examining AI issues. Meanwhile, President Donald Trump said he plans an executive order to prevent states from regulating AI. Cassel argued her bill still stands because federal law gives states primary authority over insurance regulation under the McCarran-Ferguson Act.

Where stakeholders stand

Opposition came from the Florida Insurance Council, the American Property Casualty Insurance Association, and the Personal Insurance Federation of Florida. Their chief concerns: potential delays in claim resolution and overlap with existing fair claims practices requirements.

Support came from provider groups, including the Florida Hospital Association and the Florida Medical Association, whose members regularly submit claims and want assurance that automated tools don't unreasonably restrict payment.

Why this matters for carriers and TPAs

AI-driven claims triage and adjudication are already embedded in many operations. This bill doesn't prohibit those tools, but it would change accountability: adverse outcomes must be owned by a qualified human, and the underlying process needs to be documented in materials regulators can review.

In practical terms, compliance will hinge on governance, documentation, reviewer qualifications, and audit trails that clearly separate machine recommendations from human decisions.
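
As a rough illustration of what that separation could look like inside a claims system, the sketch below models a single decision record that keeps the machine recommendation and the human determination in distinct, timestamped fields. All class, field, and method names here are hypothetical; HB 527 does not prescribe any particular schema.

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    # Hypothetical audit record; field names are illustrative, not taken from HB 527.
    @dataclass
    class ClaimDecisionRecord:
        claim_id: str
        model_name: str                      # model or rules engine that produced the recommendation
        model_recommendation: str            # e.g. "approve", "deny", "partial_deny", "reduce_payment"
        model_rationale: str                 # machine-generated reason codes or explanation
        reviewer_id: Optional[str] = None    # licensed adjuster or clinical reviewer
        final_determination: Optional[str] = None
        reviewer_rationale: Optional[str] = None
        decided_at: Optional[datetime] = None

        def record_human_decision(self, reviewer_id: str, determination: str, rationale: str) -> None:
            """Attach the human decision alongside, not over, the machine recommendation."""
            self.reviewer_id = reviewer_id
            self.final_determination = determination
            self.reviewer_rationale = rationale
            self.decided_at = datetime.now(timezone.utc)

Keeping both sides of the decision in one record makes it straightforward to show a regulator who recommended what, who decided, and when.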

Action steps to prepare

  • Map AI touchpoints: Identify where models and rules influence coverage decisions, medical necessity, payment edits, and fraud flags.
  • Define "qualified human" criteria: Set standards for licensure, training, domain expertise, and authority for final adverse determinations.
  • Implement clear human-in-the-loop gates: Require human review before any denial, partial denial, or payment reduction, with rationale documented (a minimal sketch of such a gate follows this list).
  • Update claims manuals: Document AI/algorithm use, data inputs, model purpose, decision thresholds, exception handling, and escalation paths.
  • Strengthen auditability: Log model outputs, reviewer identity, decision timestamps, and reason codes. Ensure reproducibility for regulator review.
  • Disclose AI use where required: Build standardized notices for providers and policyholders if the bill or future rules require it.
  • Monitor performance impact: Track turnaround times, accuracy, appeal rates, and rework to quantify operational effects of added human review.
  • Tighten vendor oversight: Amend contracts to require transparency, documentation, and audit rights for any third-party models or rules engines.
  • Train adjusters and clinical reviewers: Focus on interpreting automated recommendations, bias checks, medical policy application, and recordkeeping.
  • Formalize policy governance: Establish a cross-functional committee (claims, SIU, legal, compliance, IT) to maintain model inventories and approve changes.
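
To make the human-in-the-loop gate concrete, here is a minimal sketch that extends the decision record shown earlier: adverse recommendations cannot be finalized until a qualified reviewer has recorded a determination and rationale. The "qualified" check is a placeholder for whatever licensure and expertise standards a carrier adopts; none of these names come from the bill text.

    # Builds on the ClaimDecisionRecord sketch above (hypothetical names throughout).
    ADVERSE_OUTCOMES = {"deny", "partial_deny", "reduce_payment"}

    class HumanReviewRequired(Exception):
        """An adverse recommendation reached finalization without a qualified human sign-off."""

    def finalize_claim(record: ClaimDecisionRecord, reviewer_is_qualified: bool) -> str:
        """Let AI recommend, but block any adverse outcome until a qualified human
        has recorded a final determination with a documented rationale."""
        if record.model_recommendation in ADVERSE_OUTCOMES:
            if record.final_determination is None or not reviewer_is_qualified:
                raise HumanReviewRequired(
                    f"Claim {record.claim_id}: adverse outcome requires a qualified human decision"
                )
            return record.final_determination   # the human decision controls
        # Favorable recommendations may proceed; the record is still retained for audit.
        return record.final_determination or record.model_recommendation

In practice, a gate like this would sit in the claims workflow just before any adverse notice is issued, with the completed record written to an audit log that regulators can review.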

What's next

HB 527 cleared its first House stop with unanimous support but faces further debate, especially on speed-to-resolution concerns. Expect continued focus on disclosure, auditability, and human accountability as regulators refine expectations.

Insurers that get ahead on documentation, reviewer standards, and audit trails will be better positioned regardless of final bill language.

Optional resource: If you're skilling up claims, compliance, or SIU teams on practical AI use and oversight, see curated learning paths by role at Complete AI Training.

