Fighting Fire

After L.A.'s 2025 wildfires, William May says AI-assisted estimates lowballed his rebuild costs. Insurers tout efficiency, but the fix is clear: human review, local cost data, and transparency.

Published on: Mar 11, 2026

AI, wildfires, and the claims squeeze: what insurance pros need to do now

William May's Pacific Palisades home burned to the slab in the January 2025 L.A. wildfires. He remembers a "fireball burning everything in its path." The retired pediatrician wants one thing: to rebuild.

His fight now is over numbers. He said the two-story home he bought in 2017 for $1.7 million was valued by his carrier at $1.35 million after the fire. Meanwhile, Zillow's index shows average neighborhood values up roughly 50% since 2017. "How can it be worth less now than it was when it was new?" he asked. He blames State Farm's use of Xactimate, saying the software "counts every screw" and lowballs rebuild costs.

May says he's fortunate enough to front the rebuild. Many neighbors can't. He also points to Verisk, which makes Xactimate: "These programs are sold to help insurers squeeze people for profit," he said.

State Farm said it is committed to paying all benefits available under policies and has issued over $5 billion to families affected by the fires. The company urged customers with concerns to reach out directly.

Verisk said Xactimate's AI features support tasks like summarizing information or labeling photos and "always operate under human review and control." The company said Xactimate "does not generate repair costs using AI," and that its construction cost database is market-based, transparent, human-validated, and adjustable for local conditions.

Insurers are leaning on AI to boost efficiency and sharpen risk modeling. State Farm posted $5.3 billion in net income in 2024 after a $6.3 billion loss in 2023. Leadership has said AI and other tech will help predict and prevent losses as climate volatility rises.

The promise and the potholes

AI can parse huge datasets, improve underwriting discipline, and help carriers price and manage risk with more precision - even in areas under severe climate stress. That's the pitch.

The criticism: shaky climate predictions, bias, privacy concerns, opacity, and "hallucinations." Watchdogs warn that speed-optimized systems can short-circuit complex claims that need human judgment. From California to Alabama to Illinois, lawsuits allege AI-driven underpayments, discriminatory impacts on nonwhite policyholders, and improper nonrenewals. Plaintiffs call it AI-washing - using "AI" to justify decisions that hurt consumers.

Los Angeles County recently launched a probe into whether AI tools delayed or denied wildfire claims, requesting State Farm's policies, training materials, and directives tied to AI use in claim reviews.

State Farm previously announced it would not renew about 72,000 California property policies through 2025, citing wildfire risk and related costs. The company noted recovery "doesn't move in a straight line," and said families are still working through rebuilds and claim steps.

Consultants see upside. A Bain & Co. paper projects a 30%-50% reduction in claims leakage with generative AI. A white paper by CAPE Analytics argues AI is needed to sift through contradictory data so insurers don't overexpose themselves with underpriced coverage or lose premium by overpricing.

Policyholder advocates see risks. Florida attorney Chip Merlin warns that AI can make decisions on incomplete or biased data, leading to unfair outcomes. He pointed to a 2022 Illinois class action alleging algorithms disproportionately delay repairs and payments for Black policyholders. The case is pending; State Farm says its practices comply with the law.

Amy Bach of United Policyholders says affordability and availability are being driven by climate change and insurer tech choices - including AI, predictive analytics, and aerial surveillance. Asked what benefits consumers are getting from AI today, she said: none.

Monica Palmeira of the Greenlining Institute warns AI can fuel "bluelining" - withdrawal or steep price hikes in climate-exposed areas, often the same communities excluded in the past. Lose insurance, lose mortgages. Then the contagion spreads.

What to do now: practical steps for carriers and claim teams

  • Adopt human-in-the-loop by default. Use AI for triage and assistive tasks. Keep final coverage and payment decisions human-owned, with named accountability.
  • Disclose AI use to customers. Provide plain-language explanations of what the system does, data sources, and how to request human review.
  • Vendor governance. Require transparency into model purpose, data lineage, validations, cost indexes, and local adjustment methods. Secure audit rights.
  • Localize cost estimates. Calibrate tools like Xactimate with current local labor, materials, access, code upgrades, and supply chain constraints. Document manual overrides.
  • Outlier controls. Flag claims where AI output differs >20% from credible market signals (permits, contractor bids, MLS/Zillow trends, code upgrades). Auto-escalate for senior review.
  • Fairness testing. Run regular disparate impact checks using proxy variables (geography, dwelling age, construction type). Track remediation plans.
  • Model risk management. Maintain model cards, performance drift monitoring, change logs, and rollback plans. Separate development, validation, and production access.
  • CAT playbooks. Pre-approve advance payment protocols, temporary housing limits, and expedited desk review rules. Publish SLAs for escalations and appeals.
  • Data minimization and privacy. Collect only what's necessary. Set retention schedules. Restrict reuse for underwriting without consent and regulatory clearance.
  • Generative AI guardrails. Ban use of LLM outputs for final valuations or prices. Use them to summarize files and label photos under human review.
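The outlier-control step above reduces to a simple rule: flag any claim whose AI estimate deviates more than 20% from credible market signals and route it to senior review. Here is a minimal sketch in Python; the claim fields, the median-of-signals comparison, and the threshold are illustrative assumptions, not any carrier's actual workflow.

```python
from dataclasses import dataclass
from statistics import median

ESCALATION_THRESHOLD = 0.20  # 20% deviation triggers senior review (assumed)

@dataclass
class Claim:
    claim_id: str
    ai_estimate: float            # rebuild cost from the estimating tool
    market_signals: list[float]   # e.g. contractor bids, permit valuations

def needs_escalation(claim: Claim) -> bool:
    """Flag claims where the AI estimate differs by more than 20% from
    the median of credible market signals (hypothetical rule)."""
    if not claim.market_signals:
        return True  # no corroborating data: default to human review
    benchmark = median(claim.market_signals)
    deviation = abs(claim.ai_estimate - benchmark) / benchmark
    return deviation > ESCALATION_THRESHOLD

# Example: a $1.35M AI estimate against contractor bids near $1.9M
claim = Claim("PAL-2025-001", 1_350_000, [1_850_000, 1_900_000, 2_000_000])
print(needs_escalation(claim))  # deviation is about 29%, so escalate: True
```

In practice the benchmark would blend several signal sources with recency weighting, but the shape stays the same: compute a deviation, compare it to a documented threshold, and log the escalation decision for audit.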

Policy and compliance watchlist

  • Transparency and governance expectations are rising. See the National Association of Insurance Commissioners' AI guidance for principles on accountability, fairness, and consumer disclosure.
  • States are moving toward human review requirements. Florida legislation introduced by Rep. Hillary Cassel aims to ensure people, not algorithms, make denial decisions.
  • Expect more document requests like the Los Angeles County probe. Keep AI policies, training materials, and usage logs inspection-ready.

Product moves to consider

Explore parametric add-ons for climate perils. Triggered by objective thresholds (wind speed, quake intensity, heat, flood depth), they can deliver fast, baseline payouts alongside indemnity coverage. The World Bank's "Parametric insurance overview" is a clear primer on how this works.
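Mechanically, a parametric trigger is just a published threshold check: if the measured peril index crosses a tier, a pre-agreed baseline payout is owed with no loss adjustment. A minimal sketch, where the trigger values and payout amounts are invented for illustration:

```python
# Hypothetical parametric add-on: payout tiers keyed to a measured peril
# index, e.g. sustained wind speed in mph at a named weather station.
PAYOUT_TIERS = [  # (trigger threshold, baseline payout in USD) - illustrative
    (110, 50_000),
    (90, 25_000),
    (70, 10_000),
]

def parametric_payout(measured_index: float) -> int:
    """Return the pre-agreed payout for the highest tier triggered.

    No claims adjustment is involved: the objective measurement alone
    determines the payout, which is why settlement can be fast."""
    for threshold, payout in PAYOUT_TIERS:  # tiers sorted high to low
        if measured_index >= threshold:
            return payout
    return 0

print(parametric_payout(95))   # second tier triggered: 25000
print(parametric_payout(60))   # below all triggers: 0
```

The speed comes from that simplicity, and so does the main design risk: basis risk, where the index pays out less (or more) than actual losses, which is why parametric cover works best as a fast baseline alongside, not instead of, indemnity coverage.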

In high-risk zones, test community-level solutions. A city, special district, or HOA can purchase coverage for an entire neighborhood. Pair this with honest, locally led planning where relocation may be necessary.

Bottom line

AI can help carriers price risk and process claims faster, but black-box decisions erode trust and invite litigation. Keep models assistive, make decisions explainable, and prove fairness with data.

If stories like William May's are the test, the bar is simple: fast, fair, local-cost-aware, and human-reviewed. Teams that hit that bar will write business where others pull back - and keep it.
