US Insurers Push to Limit AI Liability Amid Growing Risks
Carriers are asking regulators for room to exclude AI-related losses from corporate policies. The concern is simple: models behave like "an overly opaque black box," and a single failure can ripple across thousands of insureds at once.
This isn't theory. It's a capital management problem. AIG, Great American, and WR Berkley are among those pressing for clarity on how far coverage should go when AI is embedded in everyday operations.
Why exclusions are on the table
- Opacity and attribution: When an AI system makes a decision, cause and fault are hard to prove. That complicates triggers across E&O, cyber, media, D&O, and CGL.
- Concentration risk: Many companies rely on the same models, clouds, and vendors. One flaw can create a multi-insured event.
- Automation at scale: An error can replicate across workflows instantly, driving correlated losses.
- Silent AI exposure: Policies not drafted for AI can pick up losses unintentionally.
Incidents that signal real exposure
- Google's AI Overviews feature allegedly mischaracterized a solar company, triggering a $110 million defamation lawsuit.
- Air Canada was ordered to honor a discount its chatbot invented, refunding the customer after the fact.
- Attackers used a deepfake of an executive to steal $25 million from Arup during a video call.
Insurers can absorb a $400 million loss at a single insured. What they can't absorb is the same AI agent failure causing 10,000 simultaneous claims.
What regulators will want to see
- Clear definitions: What counts as an AI system, AI-generated output, and an AI incident.
- Accumulation controls: How carriers measure and cap exposure tied to specific models, vendors, and clouds.
- Incident reporting: Common taxonomies and timelines for AI-related events.
- Capital and reinsurance: Evidence that systemic AI scenarios are stress-tested and backed by reinsurance.
- Governance frameworks: Alignment with recognized guidance such as the NIST AI Risk Management Framework and state-level expectations tracked by the NAIC.
Practical steps for carriers
- Policy wording: Define "AI System" and "AI Output." Address how errors, hallucinations, defamation, discrimination, and data poisoning are treated. Use exclusions or sublimits where needed.
- Structured endorsements: AI-use warranties (human-in-the-loop for high-stakes decisions, audit logs on, content filters enabled), vendor due diligence requirements, and change-in-risk notice clauses.
- Aggregates that reflect model risk: Introduce per-model or per-vendor aggregates and event-based aggregates across the book for AI incidents.
- Reinsurance design: Add clash/stop-loss for AI events, define "AI event" clearly, consider parametric features for major model or platform outages.
- Exposure coding: Capture the model name, vendor, version, and use case at bind. This is the backbone of accumulation management.
- Underwriting questions that matter: Where is AI used? What decisions are automated? What guardrails exist (reference checks, RAG, rate limits, human review)? How are prompts and outputs logged?
- Vendor contracts: Push for indemnities, incident SLAs, security attestations, red-team reports, and version-change notifications.
- Claims playbook: A fast triage routine for AI incidents: preserve logs, isolate prompts/outputs, secure model/version IDs, and map to coverage triggers.
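The exposure-coding and aggregate steps above can be sketched in a few lines. This is a minimal illustration, not a production accumulation system: the `BoundPolicy` fields mirror the data the article says to capture at bind (model, vendor, version, use case), and the cap value and sample names are hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class BoundPolicy:
    """Exposure coding captured at bind (fields per the checklist above)."""
    insured: str
    vendor: str      # e.g. "VendorX" (illustrative)
    model: str       # model name as coded at bind
    version: str
    use_case: str
    limit: float     # AI sublimit in USD

def accumulation_by_model(policies, cap):
    """Sum AI sublimits per (vendor, model) and flag any per-model
    aggregate that breaches the book-level cap."""
    totals = defaultdict(float)
    for p in policies:
        totals[(p.vendor, p.model)] += p.limit
    breaches = {k: v for k, v in totals.items() if v > cap}
    return dict(totals), breaches
```

With this coding in place, a per-model aggregate check is a one-line query at bind time rather than a portfolio reconstruction after an event.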
Guidance for brokers and buyers
- Expect new AI exclusions, sublimits, and warranties on E&O, media, cyber, and CGL.
- Controls will influence price: human review for high-impact decisions, audit trails, model validation, and vendor oversight.
- Disclose AI use early. Silence creates friction and claim disputes later.
- Consider captives or structured solutions for AI-heavy operations.
A balanced path
Exclude what can't be priced. Price what can be measured. Then earn back coverage through controls that reduce frequency and correlation.
A workable compromise is a tiered model: baseline exclusions to cap unknowns, buybacks for defined use cases with audited controls, and reinsurance that treats model failure as an event with clear triggers.
Action checklist (use this today)
- Inventory AI use by insureds; tag model, vendor, version, and critical workflows.
- Update forms: definitions, exclusions/sublimits, endorsements, and disclosure duties.
- Stand up an AI incident definition and claims protocol across lines.
- Run correlation scenarios by model/vendor; adjust aggregates and reinsurance.
- Train underwriting and claims teams on AI failure modes and evidence collection.
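The correlation-scenario step in the checklist can be run as a simple deterministic stress: assume each shared model fails once and every policy tied to it claims at the same time. The sketch below is illustrative only; the book entries, severity factor, and model names are assumptions, and a real exercise would use stochastic severities and reinsurance terms.

```python
from collections import defaultdict

# Illustrative book: each entry tags the shared model a policy depends on.
book = [
    {"insured": "Acme",  "vendor": "VendorX", "model": "model-a", "limit": 5_000_000},
    {"insured": "Beta",  "vendor": "VendorX", "model": "model-a", "limit": 7_000_000},
    {"insured": "Gamma", "vendor": "VendorY", "model": "model-b", "limit": 3_000_000},
]

def stress_model_failure(book, severity=0.5):
    """Assume each model fails once; every policy tied to it claims
    severity x its AI sublimit simultaneously (a correlated event)."""
    losses = defaultdict(float)
    for p in book:
        losses[(p["vendor"], p["model"])] += severity * p["limit"]
    worst_event = max(losses.items(), key=lambda kv: kv[1])
    return dict(losses), worst_event

losses, worst = stress_model_failure(book)
# Worst single event here is VendorX/model-a: two insureds claiming at once.
```

The output is exactly the number the article warns about: not the largest single policy, but the largest correlated event, which is what aggregates and reinsurance retentions should be set against.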
AI isn't a niche add-on anymore; it's embedded in how companies sell, decide, and serve. The market needs clarity on what is covered, what is excluded, and what earns its way back through proof of control. That's how we reduce systemic risk without choking off useful technology.