Insurers Move to Wall Off AI Risk: What Exclusions Mean for Your Book
AIG, WR Berkley, and Great American have asked regulators to approve new exclusions that would let them deny claims tied to the use or integration of AI systems - chatbots, agents, or components buried in workflows. The driver is obvious: public, expensive AI mistakes are stacking up, and risk models increasingly price in the threat of systemic losses.
Examples are already costly. Google faces a $110 million defamation suit tied to AI Overview. Air Canada had to honor a discount invented by its own chatbot. UK engineering firm Arup reportedly lost £20 million after staff were fooled by a digitally cloned executive on a video call. Those incidents make clean liability lines hard to draw.
Why carriers are acting now
Mosaic Insurance says LLM outputs are still too unpredictable for traditional underwriting and calls the models a black box. Even though Mosaic sells specialist coverage for AI-enhanced software, it won't underwrite LLM-driven risks like ChatGPT-style systems.
Some proposed exclusions are sweeping. A WR Berkley version would block claims tied to "any actual or alleged use" of AI, even if the tech is a tiny part of a product. AIG told regulators it doesn't plan to switch exclusions on immediately, but wants them ready as frequency and severity rise.
The nightmare scenario: correlated losses
It's not just a single-company hit. The big fear is that an upstream model or vendor misfires - and a thousand insureds get clipped at once. As Aon's Kevin Kalinich put it, the market can handle a $400-$500 million loss tied to one company's agent, but not a wave of correlated failures across many insureds.
Endorsements are not a free pass
Some carriers are testing partial fixes via endorsements. QBE added one capping protection for fines under the EU AI Act at 2.5% of the insured limit. Chubb agreed to cover certain AI incidents while excluding anything that could spark widespread simultaneous damage.
Brokers warn that several endorsements look like added protection but actually narrow coverage. Read every definition, trigger, and carve-back twice.
For background on the regulation driving some of these endorsements, see the European Parliament's EU AI Act overview.
What this means for underwriting, broking, and risk leaders
Immediate actions for carriers
- Define "AI" precisely. Avoid blanket "any actual or alleged use" language unless you intend to exclude almost everything modern software touches.
- Add AI-specific warranties: inventory of models used, vendor list, guardrails, human-in-the-loop for critical decisions, and kill-switch procedures.
- Use sublimits, coinsurance, or higher retentions for LLM-related exposures. Consider event definitions for correlated model/vendor failures.
- Require evidence: model/version IDs, prompt/activity logs, content filters, safety test results, and third-party audit reports (where feasible).
- Treat upstream providers as catastrophe-like exposure. Stress-test clash risk in reinsurance and aggregation models.
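To make the sublimit and coinsurance mechanics concrete, here is a minimal sketch of how the pieces interact on a single loss. The figures and structure are hypothetical illustrations, not taken from any filed endorsement; the 2.5% sublimit echoes the QBE-style cap mentioned above.

```python
def llm_recovery(loss, policy_limit, sublimit_pct, coinsurance_pct, retention):
    """Estimate insurer payout for an LLM-related loss under a hypothetical
    AI endorsement: retention applies first, then the insured's coinsurance
    share, and the result is capped by a percentage sublimit of the limit."""
    sublimit = policy_limit * sublimit_pct            # e.g. 2.5% of the limit
    after_retention = max(loss - retention, 0)
    insurer_share = after_retention * (1 - coinsurance_pct)  # insured co-pays the rest
    return min(insurer_share, sublimit)

# A $3M chatbot-misstatement loss on a $10M policy with a $250K retention
# and 20% coinsurance: the 2.5% sublimit ($250K) becomes the binding cap.
payout = llm_recovery(3_000_000, 10_000_000, 0.025, 0.20, 250_000)
```

The point of the sketch: even a modest-sounding percentage sublimit can dominate the recovery math on a large loss, which is why brokers flag these endorsements as narrowing coverage.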
What brokers should do now
- Map exclusions and endorsements line-by-line. Watch for silent AI exclusions hidden in definitions of "technology services," "professional services," or "media content."
- Scenario-test wordings: chatbot misstatements, content defamation, AI-driven pricing errors, fraudulent deepfakes, and vendor outages.
- Negotiate carve-backs for unintended outcomes, narrow "widespread damage" language, and clearer triggers for media, E&O, and cyber interplay.
- Align contracts: push for vendor indemnities, service credits tied to AI incidents, and notification duties that match policy conditions.
Guidance for insureds deploying AI
- Inventory your AI: where it lives, who maintains it, what decisions it influences, and its upstream dependencies. Keep a current bill of materials (models, datasets, APIs, plugins).
- Build controls: approvals for use cases, human oversight for high-impact actions, rollback plans, and sandbox testing before production.
- Log everything: prompts, outputs, model/version, feature flags, and vendor incident tickets. This cuts loss size and speeds claims.
- Harden against social engineering and deepfakes: identity verification on high-value payments, multi-channel callbacks, and executive video/audio authentication steps.
- Contract for AI risk: explicit vendor obligations on security, model behavior, uptime, notification, and indemnity. Don't rely on generic MSAs.
- Tune disclaimers and UX on customer-facing chat and search. Make it clear what is advisory, what is binding, and where human review applies.
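The "log everything" point above can be sketched as a thin audit wrapper around whatever model client an insured uses. Function and field names here are illustrative, not from any particular vendor SDK; the idea is simply that every call leaves a structured record an adjuster or auditor can replay.

```python
import json
import time
import uuid

def log_ai_call(model_id, model_version, prompt, call_fn, log_path="ai_audit.jsonl"):
    """Invoke an AI model via call_fn(prompt) and append a structured
    JSON Lines audit record: event ID, timestamp, model/version, prompt,
    output, and status - the evidence a claim will later depend on."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_id": model_id,
        "model_version": model_version,
        "prompt": prompt,
    }
    try:
        record["output"] = call_fn(prompt)
        record["status"] = "ok"
    except Exception as exc:
        record["output"] = None
        record["status"] = f"error: {exc}"
        raise
    finally:
        # Append even on failure, so error events are preserved too.
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
    return record["output"]

# Usage with a stand-in model function:
reply = log_ai_call("demo-llm", "2024-01", "Summarize policy X.",
                    lambda p: "stub summary")
```

Append-only records like these are cheap to keep and directly serve the claims checklist below: timestamps, model IDs, and prompt/output pairs are exactly what needs preserving after an incident.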
Policy language to watch
- AI definitions that swallow routine software or analytics.
- Broad "any actual or alleged use" phrasing without proportionality or intent thresholds.
- Exclusions that remove media, E&O, or cyber coverage when AI is even tangentially involved.
- Endorsements that add a small benefit (e.g., limited fine cover) but introduce sweeping new exclusions elsewhere.
Claims preparation checklist
- Preserve evidence early: logs, prompts, outputs, timestamps, model IDs, vendor notices, and user access records.
- Quantify fast: customer refunds, operational downtime, reputational harm proxies (traffic, churn), and third-party legal claims.
- Coordinate policies: cyber, tech E&O, media liability, crime, and D&O for disclosure risk. Avoid gaps caused by AI-triggered exclusions.
Market trajectory: what to expect next
- More filings seeking broad AI exclusions, then gradual carve-backs for targeted, well-controlled use cases.
- Event-based structures for correlated vendor or model failures in reinsurance and primary wordings.
- Growth in sublimits and coinsurance for LLM components until loss data improves.
- Closer alignment between cyber, tech E&O, and media policies to handle AI-generated content and automation errors.
- Regulatory pressure around disclosure and fines as the EU AI Act phases in.
Practical next steps for your team
- Update underwriting guidelines and broker checklists specific to AI/LLM risk.
- Create a one-page AI addendum for submissions: inventory, controls, vendor list, and incident playbook.
- Run a tabletop on the correlated-loss scenario: major model outage or unsafe update hitting many insureds at once.
- Educate clients and staff on AI failure modes, deepfakes, and policy implications. Upskilling helps reduce avoidable losses and documentation gaps.
If you need structured upskilling for clients or internal teams, see curated programs by role at Complete AI Training.
Bottom line
Carriers and regulators are redrawing boundaries fast. As exclusions widen and endorsements get sharper, more AI risk will sit on insureds' balance sheets. Get precise on language, push for smart carve-backs, and tighten controls now - before the next wave of correlated losses tests the market.