Burned by Wildfire, Burned by AI: How Insurers Slow Payouts and Drop Coverage

Insurers tout AI to speed claims, but post-cat misfires erode trust. Keep humans in the loop, explain decisions, and price to local reality to avoid backlash.

Published on: Jan 31, 2026

AI, Wildfire Risk, and Consumer Trust: What Insurance Leaders Must Do Now

Wildfires and severe storms are hitting harder. Carriers are leaning on AI to keep up. The promise is efficiency and sharper risk signals. The risk is eroding trust in the moments that matter most.

Consider William May, whose Pacific Palisades home was destroyed in the January 2025 L.A. wildfires. He says State Farm's estimate came in below what he believes is needed to rebuild, while neighborhood values rose sharply. He blames an AI-enabled estimating stack, pointing to Xactimate's line-by-line approach that can miss local realities during post-cat inflation.

State Farm says it's "committed to helping customers throughout the entire recovery process," noting more than $5 billion paid out after the fires and encouraging customers to reach out with concerns. Verisk says Xactimate's AI is limited to tasks like summarization or photo labeling under human review, and that the platform does not use AI to set repair costs. They emphasize a market-based, human-validated cost database that adjusters can tune to local conditions.

Across the industry, AI is pitched as a way to sharpen underwriting, process claims faster, and reduce leakage. One major carrier swung from a $6.3 billion loss in 2023 to a $5.3 billion profit in 2024, with executives crediting better prediction and prevention. Consultants forecast sizable drops in claims leakage. Vendors say AI can sift "mountains" of conflicting data so quotes don't overexpose balance sheets.

But there's a growing list of risks: biased inputs, opaque logic, brittle models under climate volatility, and overreliance on automation for complex, high-emotion claims. Lawsuits from California to Illinois allege underpayment, discrimination, and "AI-washing." Los Angeles County has opened a probe into AI tools allegedly used to delay or deny claims. In parallel, nonrenewals are expanding in high-risk geographies, pushing affordability and availability to the brink.

What this means for insurance leaders

  • Trust is the new combined ratio. Post-cat claims decisions define your brand more than any marketing campaign.
  • Regulatory scrutiny is here. Expect demands for documentation, human-in-the-loop controls, and explainability standards.
  • Vendor risk is model risk. Your accountability doesn't end at the API. Third-party data and algorithms are squarely on the hook.
  • Fairness is measurable. If models or rules slow or underpay specific groups, you'll face legal and reputational consequences.
  • Product design is shifting. Parametric features and community-level solutions can stabilize coverage where traditional indemnity struggles.

Action plan for responsible AI in P&C

  • Establish AI governance. Inventory every model and decision rule touching underwriting, pricing, claims, SIU, and customer comms, and assign each a clear owner.
  • Human-in-the-loop by default for adverse decisions. Denials, partial denials, and complex scope disagreements require credentialed adjuster or supervisor signoff.
  • Document explainability. For each model: purpose, inputs, limitations, and human override paths. Keep consumer-facing explanations plain and specific.
  • Bias and drift testing. Run pre-deployment and quarterly checks for disparate impact (approval rates, cycle times, payout variance) with remediation plans.
  • Third-party oversight. Contractual rights to audit vendors, review training data sources, update cost databases, and switch off problematic features fast.
  • Claims cost realism. Validate estimators against local, post-cat market conditions. Require field adjuster calibration and contractor feedback loops.
  • Appeals and escalation. Offer easy, fast reinspection and second-look pathways. Track overturn rates as a health metric.
  • Privacy and aerial/IoT use. Disclose data sources, get proper consent, and align with state privacy laws and model bulletins.
  • Training and playbooks. Upskill adjusters and underwriters on how tools work, where they fail, and how to override confidently.
  • Audit trail. Log every automated recommendation, human override, and reason code. Regulators will ask.
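
The bias and drift testing above can be automated as a recurring job. Here is a minimal sketch in Python: the record fields (`group`, `approved`, `cycle_days`) and the four-fifths screening threshold are illustrative assumptions, not any carrier's or regulator's actual schema.

```python
from statistics import mean

def disparate_impact(claims, protected, reference):
    """Ratio of approval rates, protected group vs. reference group.
    A common screening heuristic flags ratios below 0.8 (the 'four-fifths rule')."""
    def rate(group):
        rows = [c for c in claims if c["group"] == group]
        return sum(c["approved"] for c in rows) / len(rows)
    return rate(protected) / rate(reference)

def cycle_time_gap(claims, protected, reference):
    """Difference in mean days-to-decision between the two groups."""
    def avg(group):
        return mean(c["cycle_days"] for c in claims if c["group"] == group)
    return avg(protected) - avg(reference)

# Toy records for illustration only.
claims = [
    {"group": "A", "approved": 1, "cycle_days": 12},
    {"group": "A", "approved": 1, "cycle_days": 15},
    {"group": "A", "approved": 0, "cycle_days": 30},
    {"group": "B", "approved": 1, "cycle_days": 14},
    {"group": "B", "approved": 0, "cycle_days": 28},
    {"group": "B", "approved": 0, "cycle_days": 35},
    {"group": "B", "approved": 0, "cycle_days": 40},
]

di = disparate_impact(claims, protected="B", reference="A")
gap = cycle_time_gap(claims, protected="B", reference="A")
if di < 0.8:
    print(f"Flag for review: approval-rate ratio {di:.2f} is below 0.8")
```

A real program would pull from claims systems, control for legitimate loss characteristics, and feed flagged results into the remediation plans the checklist calls for; this only shows the shape of the screening metrics.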

For policy guidance, see the National Association of Insurance Commissioners' AI framework and the FTC's overview of AI and discrimination risk in decisioning tools.

Post-cat claims: a practical playbook

  • Triage smart, decide human. Use AI for intake, routing, and fraud flags; keep scope, coverage, and payout decisions human-led in complex losses.
  • Localize pricing fast. Surge pricing for materials and labor changes quickly. Update cost databases weekly. Require on-the-ground validation.
  • Vendor calibration. For tools like Xactimate, publish when and how adjusters should adjust line items. Audit variance vs. contractor invoices.
  • Customer communication. If a tool informed an estimate, say so, and explain how humans reviewed it. Invite documentation and reinspection requests.
  • Cycle-time visibility. Monitor time-to-first-payment, supplement rates, and reopen rates by region and customer segment. Fix bottlenecks immediately.
  • Dispute resolution. Offer neutral appraisal or mediation pathways with clear SLAs. Track outcomes to improve guidelines.
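
The cycle-time visibility step above amounts to a few grouped metrics. A minimal sketch in Python, assuming hypothetical claim records with `region`, `fnol` (first notice of loss), `first_payment`, and `reopened` fields; real systems would add supplement rates and customer-segment cuts:

```python
from collections import defaultdict
from datetime import date

# Toy records; field names are illustrative, not a real claims schema.
claims = [
    {"region": "LA", "fnol": date(2025, 1, 10), "first_payment": date(2025, 2, 2), "reopened": False},
    {"region": "LA", "fnol": date(2025, 1, 12), "first_payment": date(2025, 3, 1), "reopened": True},
    {"region": "OC", "fnol": date(2025, 1, 11), "first_payment": date(2025, 1, 25), "reopened": False},
]

def region_metrics(claims):
    """Per-region mean days to first payment and reopen rate."""
    by_region = defaultdict(list)
    for c in claims:
        by_region[c["region"]].append(c)
    out = {}
    for region, rows in by_region.items():
        days = [(c["first_payment"] - c["fnol"]).days for c in rows]
        out[region] = {
            "mean_days_to_first_payment": sum(days) / len(days),
            "reopen_rate": sum(c["reopened"] for c in rows) / len(rows),
        }
    return out

metrics = region_metrics(claims)
```

Tracking these by region and segment is what surfaces the bottlenecks, and the disparities, that the playbook says to fix immediately.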

Underwriting and market stability

  • Be precise with retreat. If you must nonrenew, use transparent criteria, offer mitigation paths, and coordinate with FAIR Plans to reduce shocks.
  • Pair indemnity with parametric. Use objective triggers (wind speed, wildfire perimeter, flood depth) for rapid relief that eases hardship and litigation.
  • Community-level solutions. Explore pooled or municipal coverage where single-risk economics fail. Pilot with clear metrics.
  • Monitor "bluelining" risk. Test for geographic patterns that track income or race proxies. Document safeguards and adjustments.
  • Regulator-ready packages. Keep model documentation, fairness results, and consumer notices in one place for exams and subpoenas.
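
The appeal of parametric triggers is that the payout logic is objective and simple enough to show the customer. A minimal sketch, with made-up wind-speed tiers and amounts that stand in for whatever a real product files:

```python
def parametric_payout(measured_wind_mph, policy):
    """Pay the amount for the highest trigger tier the measurement meets.
    Tiers and amounts here are illustrative, not a real product design."""
    best = 0
    for threshold_mph, amount in policy["tiers"]:
        if measured_wind_mph >= threshold_mph:
            best = max(best, amount)
    return best

# Hypothetical three-tier wind policy: (threshold in mph, payout in dollars).
policy = {"tiers": [(80, 5_000), (100, 15_000), (120, 40_000)]}

parametric_payout(110, policy)  # meets the 100 mph tier but not 120
```

Because the trigger is a third-party measurement rather than an adjusted loss estimate, relief can be wired within days of the event, which is exactly the rapid-hardship role the bullet above describes.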

Lawmakers in several states are moving to require human review for denials and other adverse actions. That aligns with where the industry should be headed: AI as an advisor, humans as final decision-makers, with clear reasons given to customers.

What to watch next

  • County and AG investigations. Expect broader information requests on AI use in claims and underwriting.
  • Class actions on bias and underpayment. Track outcomes and adjust oversight and communications ahead of rulings.
  • New AI model bulletins. More states will likely adopt guidance that mirrors or extends NAIC recommendations.
  • Data-source scrutiny. Aerial imagery, credit-adjacent variables, and third-party risk scores will face tougher disclosure and consent rules.

The industry can't ignore climate math. But it also can't afford to lose the customer on process. Use AI to speed the right things - intake, triage, documentation - and keep people in charge of judgment. That balance is how you protect both solvency and trust.

If your teams need structured upskilling on practical AI use and oversight, explore curated options by job function.

