Insurers Are Pulling Back From AI Liability. Here's What To Do About It.
Three major carriers - AIG, Great American, and WR Berkley - are asking U.S. regulators to let them exclude AI-related liabilities from corporate policies. The reason is blunt: model outputs are too opaque to price with confidence.
That's not a hedge. It's a statement about correlated exposure and portfolio-level fragility. When risk professionals step back, the market should pay attention.
The Buzz
- Carriers petitioning regulators to exclude AI liability from corporate policies
- A $25 million deepfake fraud and a $110 million lawsuit tied to Google's AI show real damages
- Systemic risk fears: one model failure could trigger thousands of claims at once
- Pullback could push firms to self-insure or slow AI adoption
Recent incidents highlight why underwriters are uneasy. Google's AI Overview allegedly defamed a solar company, which responded with a $110 million claim. Air Canada was ordered to honor a discount its chatbot invented after a customer took the airline to small claims court. Fraudsters used deepfaked executives on what looked like a routine video call to extract $25 million from Arup.
The loss severity is real, but the portfolio math is worse. If thousands of insureds lean on the same foundation models from OpenAI, Google, or Microsoft, a single failure mode can create a surge of near-simultaneous losses. Brokers warn the problem isn't one $400 million event - it's 10,000 mid-size hits landing in the same quarter.
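To see why the aggregation matters more than any single headline loss, here is a rough back-of-the-envelope simulation. Every figure in it - claim probabilities, severity, the share of insureds hit when a shared model fails - is an illustrative assumption, not market data; the point is how a common dependency fattens the tail.

```python
# Illustrative only: all probabilities and severities are assumed, not market data.
import numpy as np

rng = np.random.default_rng(42)

n_insureds = 10_000        # policies relying on the same foundation model
severity = 40_000          # assumed mid-size loss per affected insured ($)
p_base = 0.02              # assumed standalone quarterly claim probability
p_model_failure = 0.02     # assumed probability the shared model fails in a quarter
hit_rate = 0.50            # assumed share of insureds affected when it does
n_sims = 20_000

# Book A: same expected frequency, but every loss is independent.
p_equivalent = p_base + p_model_failure * hit_rate
claims_a = rng.binomial(n_insureds, p_equivalent, size=n_sims)

# Book B: baseline losses plus a single shared failure hitting many insureds at once.
failure = rng.random(n_sims) < p_model_failure
claims_b = rng.binomial(n_insureds, p_base, size=n_sims)
claims_b = claims_b + failure * int(n_insureds * hit_rate)

for name, claims in [("independent", claims_a), ("shared model", claims_b)]:
    losses = claims * severity
    print(f"{name:>13}: mean ${losses.mean():,.0f}  99th pct ${np.percentile(losses, 99):,.0f}")
```

In this toy setup both books have the same expected loss; the difference sits entirely in the tail, which is the part carriers have to hold capital against and the part they say they cannot rate.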
According to reporting from the Financial Times, carriers say AI is too much of a black box to rate cleanly. If regulators approve AI exclusions, we're effectively creating a new class of uninsurable operational risk. That forces buyers to rethink deployment pace, coverage architecture, and balance-sheet tolerance.
Some specialty markets are experimenting with AI-specific wordings, often priced to reflect the uncertainty. But if the big carriers won't participate at scale, that is a signal the risk is fundamentally different from the traditional perils the market is built to absorb.
What This Means For Insurance Professionals
Your clients are adopting AI in customer service, underwriting ops, finance, and marketing. Many assume their package, cyber, media, or E&O policies will respond to AI-driven mishaps. Many will be surprised.
This is the moment to map the gap, reset expectations, and push controls that reduce both frequency and aggregation.
Coverage Architecture: Actions To Take Now
- Run an AI coverage gap review: map exposures across defamation, product liability, privacy, IP, media, E&O/Tech E&O, crime, and cyber. Identify where AI exclusions, sublimits, or "failure to supervise automation" clauses could limit recovery.
- Tighten definitions: clarify what qualifies as "AI system," "model," "agent," "training data," and "output" to avoid gray areas at claim time.
- Negotiate sensible sublimits rather than blanket exclusions where feasible; pair them with risk controls and warranties to keep the line insurable.
- Draft aggregation and event language: define an "AI event" (e.g., a specific model/version failure) for consistent treatment across policies and treaties.
- Coordinate lines: avoid silent AI exposures creeping into media liability or general liability via ambiguous wording.
Underwriting & Risk Controls To Require
- Governance: named AI risk owner, model registry, change management, and audit logs for prompts, outputs, and approvals (see the sketch after this list).
- Human-in-the-loop: documented review gates before external communications, financial actions, or customer-impacting decisions.
- Vendor risk: third-party model SLAs, indemnities, security attestations, and incident reporting obligations.
- Security basics for fraud: multi-person approvals for funds transfer, call-back verification, deepfake awareness training, and identity-proofing for high-value actions.
- Testing: red-teaming, bias and hallucination tests, and rollback plans for model updates. Kill-switches for critical workflows.
- Data hygiene: provenance controls, IP screening, and content filters to reduce defamation or copyright exposures.
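The governance and human-in-the-loop items above are the controls underwriters are most likely to ask about in questionnaires. Below is a minimal sketch of what they can look like in practice; the field names, file path, and workflow are illustrative assumptions, not a standard schema.

```python
# A minimal sketch of an AI audit-log record plus a human review gate.
# Field names and workflow are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json
import uuid

@dataclass
class AIActionRecord:
    """One auditable AI interaction: what was asked, what came back, who approved it."""
    model_id: str                 # e.g. vendor model name and pinned version
    prompt: str
    output: str
    reviewer: str | None = None   # human-in-the-loop approver, if any
    approved: bool = False
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def require_human_approval(record: AIActionRecord, reviewer: str, approved: bool) -> AIActionRecord:
    """Gate customer-impacting actions on an explicit, logged human decision."""
    record.reviewer = reviewer
    record.approved = approved
    # Append-only log gives claims teams the prompt/output/decision trail later.
    with open("ai_audit_log.jsonl", "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")
    return record

# Example: a chatbot drafts a customer-facing discount; a human must sign off first.
draft = AIActionRecord(model_id="vendor-model-v1.2",
                       prompt="Customer asks about a bereavement fare",
                       output="Offer a 30% retroactive discount")
decision = require_human_approval(draft, reviewer="ops.supervisor@example.com", approved=False)
if decision.approved:
    print("Send response to customer")
else:
    print("Escalate to a human agent; do not send the AI-drafted offer")
```

The feature insurers care about is simple: every customer-impacting AI output leaves a prompt, output, approver, and timestamp that a claims team can reconstruct later.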
Broker Advisory: Prepare Clients For Tradeoffs
- Set expectations on pricing and scope: explain why correlated AI risk breaks traditional rating and why sublimits or exclusions may be unavoidable.
- Explore alternatives: captives, higher retentions, parametric structures tied to defined AI outage/error triggers, or narrowly scoped buy-backs.
- Contractual transfer: push stronger vendor indemnities and warranties; align with incident notification and evidence retention.
- Scenario planning: model a multi-client outage of a common model and quantify liquidity needs if coverage is limited.
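For the scenario-planning item, even crude arithmetic helps clients see what limited AI coverage means for liquidity. The figures below - client count, average claim, sublimit, retention - are illustrative assumptions.

```python
# Illustrative scenario: a shared-model failure hits many clients in one quarter.
# All figures are assumptions for discussion, not market data.

clients_on_shared_model = 120      # clients in the book using the same vendor model
gross_loss_per_client = 250_000    # assumed average claim per affected client ($)
ai_sublimit = 100_000              # assumed AI-specific sublimit per policy ($)
retention = 25_000                 # assumed per-claim retention ($)

recovered_per_client = min(max(gross_loss_per_client - retention, 0), ai_sublimit)
retained_per_client = gross_loss_per_client - recovered_per_client

total_gross = clients_on_shared_model * gross_loss_per_client
total_retained = clients_on_shared_model * retained_per_client

print(f"Gross event loss:      ${total_gross:,.0f}")
print(f"Recovered per client:  ${recovered_per_client:,.0f}")
print(f"Client liquidity need: ${retained_per_client:,.0f} each, ${total_retained:,.0f} across the book")
```

In this toy example, a $100,000 AI sublimit leaves each affected client retaining $150,000, and the book as a whole facing an $18 million liquidity need from a single shared-model event.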
Claims & Reserving Readiness
- Evidence: preserve logs, model versions, prompts, outputs, and decision trails to prove causation and timing.
- Causation analysis: distinguish AI-generated error from human misuse or process failure to determine coverage path.
- Aggregation monitoring: track common vendors/models across the portfolio to anticipate event clustering.
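The aggregation-monitoring item is straightforward to operationalize: keep a mapping from each insured to the models it depends on, then total the exposure per model. A minimal sketch, with made-up insureds, vendors, and limits:

```python
# A minimal sketch of portfolio aggregation monitoring: map each insured to the
# foundation models it depends on, then total exposure per model.
# Insured names, vendor/model identifiers, and limits are illustrative assumptions.
from collections import defaultdict

policies = [
    {"insured": "Acme Retail",  "limit": 5_000_000, "models": ["vendor-a/model-x"]},
    {"insured": "Beta Health",  "limit": 3_000_000, "models": ["vendor-a/model-x", "vendor-b/model-y"]},
    {"insured": "Gamma Bank",   "limit": 8_000_000, "models": ["vendor-a/model-x"]},
    {"insured": "Delta Travel", "limit": 2_000_000, "models": ["vendor-b/model-y"]},
]

exposure_by_model = defaultdict(lambda: {"insureds": 0, "total_limit": 0})
for policy in policies:
    for model in policy["models"]:
        exposure_by_model[model]["insureds"] += 1
        exposure_by_model[model]["total_limit"] += policy["limit"]

# Rank shared dependencies by total limit to see where one failure would cluster claims.
for model, agg in sorted(exposure_by_model.items(), key=lambda kv: -kv[1]["total_limit"]):
    print(f"{model}: {agg['insureds']} insureds, ${agg['total_limit']:,} aggregate limit")
```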
Reinsurance & Capital
- Clarify treaty wordings for AI-driven events, including event definitions, hours clauses, and aggregation mechanics.
- Stress-test accumulation across lines touching AI exposure. Update capital models for correlated, near-simultaneous claims.
- Consider industry loss warranties or portfolio caps for defined AI events if available.
Regulatory Signals To Watch
If regulators approve broad AI exclusions, expect a hardening market and stricter underwriting questionnaires. Buyers may slow rollout, self-insure more risk, or re-architect workflows to reduce automation exposure.
Keep an eye on guidance such as the NAIC's work on AI governance and model risk. It will influence both expected controls and claims handling standards.
Bottom Line
Carriers are willing to insure oil rigs and rockets. They're hesitating on AI decisions that can replicate across thousands of insureds at once. That tells you everything about the current risk profile.
For now, the playbook is simple: clarify coverage, tighten controls, plan for aggregation, and price the uncertainty. If clients insist on aggressive AI adoption, help them do it with eyes open - and balance sheet ready.
Financial Times coverage of insurer AI exclusions
NAIC resources on AI governance and model risk