Insurers Pull Back on AI Coverage as Billion-Dollar Claims Loom

Major insurers are curbing AI coverage, wary of billion-dollar, cascade-style losses. Expect tighter terms, exclusions, and higher premiums unless controls and contracts improve.

Published on: Nov 26, 2025

AIG, Great American, and WR Berkley have moved to limit exposure to AI-driven losses, seeking regulatory approval to restrict claims tied to AI agents and chatbots. This is a clear signal: carriers see AI as a potential source of outsized, systemic losses that current forms weren't built to absorb.

The question underneath it all is simple: who pays when an AI system goes off-script at scale? With more companies deploying AI in live customer flows, a single bad model update could trigger thousands of simultaneous incidents.

Why carriers are pulling back

Traditional liability frameworks assume human decision chains, smaller user groups, and slower propagation. AI flips that: autonomous systems can interact with massive audiences in minutes, and errors can cascade before anyone notices.

Executives are staring at tail-risk scenarios that look more like catastrophe events than software bugs. Internally, many carriers now believe worst-case AI claims could run through reserves and spill into reinsurance layers.

Legal and claims signals you shouldn't ignore

AI-related disputes are accelerating: misinformation from chatbots, privacy violations, IP issues, biased outcomes, and alleged financial harm from automated interactions. Regulators are also paying closer attention to AI outputs, documentation, and controls.

Recent Financial Times coverage points to multibillion-dollar claim potential tied to AI failures, a view now reflected in carrier filings, product changes, and market commentary.

Product implications: expect tighter terms and new forms

  • Sublimits, aggregates, and AI-specific exclusions across CGL, Tech E&O, Media/Professional Liability, and Cyber.
  • Clarified definitions for "AI incident," "autonomous system," "generative output," and "model provider."
  • Claims-made structures, earlier retro dates, batch clauses, and incident reporting deadlines calibrated to AI event velocity.
  • Warranties around model governance, human oversight, logging, content filters, and kill-switch capabilities.
  • Vendor and data-source indemnity requirements, with audit rights and evidence of monitoring.

Underwriting focus areas (what will get asked and scored)

  • Use-case map: where AI touches customers, finances, safety, or regulated decisions.
  • Controls: testing, red-teaming, prompt/input filtering, output moderation, abuse prevention, and rollback plans.
  • Governance: documented policies, approval gates, risk sign-offs, and clear model ownership.
  • Telemetry: versioning, logs, event correlation, and data retention for reconstructing incidents (see the logging sketch after this list).
  • Third parties: model vendors, data licenses, indemnities, and downstream reliance.
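
To make the telemetry item concrete, here is a minimal sketch in Python of a per-interaction record that supports incident reconstruction. The field names and the print-as-storage shortcut are illustrative assumptions, not any carrier's required schema:

```python
import json
import uuid
from datetime import datetime, timezone

def log_ai_interaction(model_id: str, model_version: str,
                       prompt: str, output: str,
                       filters_applied: list[str]) -> dict:
    """Build one append-only telemetry record for an AI interaction,
    tying each output to the model version and controls active at the time."""
    record = {
        "event_id": str(uuid.uuid4()),          # unique id for event correlation
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,         # links the output to a release
        "prompt": prompt,
        "output": output,
        "filters_applied": filters_applied,     # controls in force at runtime
    }
    # Production systems would write to durable, retention-managed storage;
    # printing stands in for that here.
    print(json.dumps(record))
    return record

log_ai_interaction("support-bot", "2025-11-20.3",
                   "Can I cancel my policy online?",
                   "Yes, you can cancel from your account settings.",
                   ["pii_redaction", "advice_disclaimer"])
```

Records like this are what make "reconstructing incidents" possible: without the model version and active controls captured at the time, a disputed output is one side's word against the other's.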

Immediate actions for brokers and risk managers

  • Inventory AI systems (internal and third-party), including where they touch customers or regulated processes (a minimal schema sketch follows this list).
  • Quantify worst-case scenarios by use case: defamation, securities/misadvice, privacy, discrimination, safety, and IP.
  • Harden contracts with AI vendors: indemnity, SLAs, incident cooperation, audit rights, and evidence of controls.
  • Implement practical safeguards: human-in-the-loop for high-impact decisions, content disclaimers, approval gates for releases.
  • Prepare proof: testing reports, red-team summaries, governance records, and logs. This reduces dispute friction and improves placement.
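
As a starting point for the inventory item above, a minimal sketch assuming a simple internal schema in Python; all names and fields here are illustrative, not drawn from any broker's submission form:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an AI system inventory for broker submissions."""
    name: str
    owner: str                       # accountable business owner
    vendor: str | None               # None if built in-house
    customer_facing: bool            # touches live customer flows?
    regulated_decision: bool         # credit, claims, hiring, etc.
    worst_case_scenarios: list[str] = field(default_factory=list)
    controls: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="claims-triage-assistant",
        owner="Claims Ops",
        vendor="ExampleModelCo",     # hypothetical vendor
        customer_facing=True,
        regulated_decision=True,
        worst_case_scenarios=["discrimination", "misadvice"],
        controls=["human-in-the-loop", "output logging", "rollback plan"],
    ),
]

# Surface the highest-exposure systems first for underwriter review.
high_risk = [r for r in inventory if r.customer_facing and r.regulated_decision]
print([r.name for r in high_risk])
```

Even a flat list like this answers most of the underwriting focus questions above in one artifact, which is why it belongs in the submission packet.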

Claims handling: set yourself up before the loss

  • Define what counts as an AI "occurrence" or "batch event" across lines to avoid coverage ambiguity (a grouping sketch follows this list).
  • Align on incident triage: who is notified, what data to preserve, and how to capture model behavior at the time of loss.
  • Pre-select panel counsel with AI and data litigation experience; agree on forensic vendors who understand model forensics.
  • Document disclaimers and user notices where chatbots or decision aids could be misconstrued as advice.
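
One way to make the "batch event" definition operational is to group incident records by the model release that produced them, so thousands of related complaints roll up to a single occurrence. A minimal sketch, assuming telemetry records like the ones shown earlier (field names illustrative):

```python
from collections import defaultdict

# Hypothetical incident records; in practice these come from telemetry.
incidents = [
    {"id": 1, "model_version": "2025-11-20.3", "claimant": "A"},
    {"id": 2, "model_version": "2025-11-20.3", "claimant": "B"},
    {"id": 3, "model_version": "2025-10-02.1", "claimant": "C"},
]

def batch_by_root_cause(records: list[dict]) -> dict[str, list[dict]]:
    """Group incidents sharing a root cause (here, a model release)
    so related losses can be handled as one batch event."""
    batches: dict[str, list[dict]] = defaultdict(list)
    for rec in records:
        batches[rec["model_version"]].append(rec)
    return dict(batches)

for version, batch in batch_by_root_cause(incidents).items():
    print(f"model {version}: {len(batch)} incident(s) in batch")
```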

Line-by-line watchouts

  • Tech E&O/Professional: advice errors by chatbots or decision aids; clarify whether "automated advice" is within scope.
  • Media/IP: defamation, copyright in training data or outputs, likeness issues.
  • Cyber: privacy violations, data leakage, prompt injection impacts; expect carve-outs for model-caused third-party harms.
  • CGL: personal/advertising injury from generative outputs; many carriers will push this into specialized forms.

Capacity and pricing outlook

Expect premium increases, tighter aggregates, and broad exclusions for high-risk use cases. Capacity may shift to specialized AI liability products with strict warranties and event definitions.

Small and mid-market insureds that adopted AI for efficiency could feel the squeeze first. Less coverage plus higher retentions will raise operational risk and slow deployments without better controls.

For carriers: portfolio discipline that matters

  • Scenario testing and stress models for multi-insured AI events (shared vendors, library vulnerabilities, major model updates); a toy simulation follows this list.
  • Exposure caps by use case; facultative placements for outsized risks; syndication for large programs.
  • Clear triggers and exclusions maintained consistently across lines to avoid silent AI exposure.
  • Underwriting data standards: minimum telemetry, governance evidence, and vendor contract quality.
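
To illustrate the scenario-testing bullet, here is a toy Monte Carlo sketch in Python showing how a single shared-vendor failure correlates losses across a portfolio. Every parameter is an assumption chosen for demonstration, not real loss data:

```python
import random

random.seed(7)  # reproducible toy run

N_INSUREDS = 1_000      # insureds in the portfolio (assumed)
P_INDEPENDENT = 0.002   # standalone AI loss probability per insured (assumed)
P_VENDOR_EVENT = 0.01   # chance a shared model vendor ships a bad update (assumed)
SHARE_ON_VENDOR = 0.30  # fraction of insureds using that vendor (assumed)
SEVERITY = 250_000      # flat per-insured loss, for simplicity (assumed)
TRIALS = 10_000

def simulate_year() -> int:
    """Aggregate portfolio loss for one simulated year."""
    # Independent, uncorrelated AI losses.
    losses = sum(SEVERITY for _ in range(N_INSUREDS)
                 if random.random() < P_INDEPENDENT)
    # Correlated tail: one vendor event hits many insureds at once.
    if random.random() < P_VENDOR_EVENT:
        losses += int(N_INSUREDS * SHARE_ON_VENDOR) * SEVERITY
    return losses

results = sorted(simulate_year() for _ in range(TRIALS))
mean = sum(results) / TRIALS
p99 = results[int(0.99 * TRIALS)]  # 99th-percentile annual loss
print(f"mean annual loss:     ${mean:,.0f}")
print(f"99th percentile loss: ${p99:,.0f}")
```

Even with modest event probabilities, the correlated vendor scenario dominates the tail of the distribution, which is exactly the catastrophe-shaped exposure driving the pullback described above.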

Practical next steps (90-day plan)

  • Update submission packets with an AI addendum: use-case inventory, governance docs, testing evidence, vendor list.
  • Rewrite internal/external FAQs and customer disclosures for any AI-facing flows.
  • Run a table-top AI incident exercise with legal, claims, IT, and comms. Capture gaps and assign owners.
  • Renegotiate key vendor terms to secure indemnity, logging, and response cooperation.
  • Re-market placements early if you rely on AI for customer decisions or content; you will need time for underwriter review.

Helpful frameworks

If you need a common language for controls, the NIST AI Risk Management Framework is a solid baseline for underwriting and risk reviews. Share it with clients and vendors to speed up due diligence.

Upskilling your team

Stronger AI governance shortens the underwriting conversation and limits loss severity. If your teams need structured training, see curated options by role at Complete AI Training.

Bottom line

AI risk has scaled faster than coverage language. With major carriers moving to limit exposure, your advantage comes from clarity: know your AI footprint, tighten controls, and lock down contracts now, before renewals force the issue.

