AI Too Risky: Why Major Insurers Are Pulling Back
Major carriers are asking U.S. regulators for permission to exclude AI-related liabilities from corporate policies. Names in the mix include AIG, Great American, and WR Berkley. The blunt rationale: current models are too opaque, too agentic, and too correlated to price in any reliable way.
Recent incidents didn't help. Google's AI Overview kicked off a high-dollar defamation suit, Air Canada paid for a chatbot's fabricated discount, and a deepfake video call drained $25 million from Arup. One-off losses are tolerable; a simultaneous wave is not.
Why AI Breaks Traditional Underwriting
AI creates accumulation risk that looks more like systemic cyber than traditional E&O. A single model update or prompt-injection pattern can propagate across thousands of customers at once. Underwriters can price a $400 million hit to one insured; they can't absorb 10,000 mid-sized claims triggered overnight.
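To make the accumulation point concrete, here is a minimal back-of-the-envelope sketch; the insured count and average claim size are illustrative assumptions, not figures from any carrier's book.

```python
# Illustrative accumulation math: one severe loss vs. many correlated mid-sized claims.
# All figures below are hypothetical assumptions for the sketch, not market data.

single_large_loss = 400_000_000          # one insured, one severe event

insureds_affected = 10_000               # customers exposed to the same model/vendor update
avg_claim = 250_000                      # assumed average mid-sized claim
correlated_aggregate = insureds_affected * avg_claim

print(f"Single large loss:    ${single_large_loss:,.0f}")
print(f"Correlated aggregate: ${correlated_aggregate:,.0f}")
print(f"Ratio:                {correlated_aggregate / single_large_loss:.1f}x")
# One update that trips thousands of mid-sized claims dwarfs the "big" individual loss,
# which is why accumulation, not severity, is the pricing problem.
```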
Models are black boxes to most buyers and many vendors. Versioning is fluid, guardrails vary, and post-deployment behavior changes with new data and integrations. That instability pushes carriers to carve out exposure rather than gamble on thin loss data.
Where the Silent Exposure Lives
- Media/E&O: Defamation, IP infringement, and false advertising from AI-generated content and recommendations.
- Cyber/Crime: Deepfake-enabled social engineering, automated fraud, data leakage via AI tools.
- GL/Product: Automated decisions causing bodily injury or property damage (e.g., logistics, industrial controls).
- D&O: Disclosure risk around AI claims, controls, and governance.
Loss Scenarios That Scale
- Model update introduces a subtle error that misprices, misroutes, or mislabels at thousands of clients.
- Chatbots make uniform false claims or guarantees, triggering mass refund and class-action activity.
- Deepfake fraud playbook spreads, exploiting the same weak approval workflow across many insureds.
These are correlated exposures. That's the carrier's nightmare.
Broker Actions: Set Expectations and Reframe Risk
- Tell clients they are effectively self-insuring AI-specific failures unless they buy dedicated, negotiated endorsements.
- Map AI use to lines of coverage: who builds, who buys, and where advice, content, or automated actions hit customers or revenue.
- Push vendors for indemnities, audit rights, log retention, and model version disclosure. Transfer what you can upstream.
- Bundle control evidence with submissions: testing results, human-in-the-loop steps, kill switches, and incident playbooks.
Underwriting Intake: What to Ask Now
- Usage inventory: Where is AI embedded? Internal ops, customer-facing content, pricing/eligibility, safety systems.
- Decision criticality: Can the AI approve, deny, dispatch, transfer, or publish without human review?
- Controls: Guardrails, red-team testing, hallucination/error rates, escalation thresholds, and rollback plans.
- Traceability: Logging, versioning, prompts, outputs, training data sources, and vendor update cadence (a minimal audit-record sketch follows this list).
- Change management: Who signs off on new models or prompts? How quickly can they be reverted?
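As a concrete illustration of what traceability can mean in practice, here is a minimal sketch of a per-decision audit record; the field names and values are assumptions chosen for illustration, not an industry-standard schema.

```python
# Minimal sketch of a per-decision AI audit record an underwriter might ask to see.
# Field names and example values are illustrative assumptions, not a standard schema.
import json
from datetime import datetime, timezone

def build_audit_record(model_id, model_version, prompt, output,
                       human_reviewer=None, action_taken=None):
    """Capture the facts you would want preserved if this decision is later disputed."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,                # which model/vendor produced the output
        "model_version": model_version,      # exact version, so behavior can be reproduced
        "prompt": prompt,                    # what the system was asked
        "output": output,                    # what it produced
        "human_reviewer": human_reviewer,    # None means no human in the loop
        "action_taken": action_taken,        # e.g. "refund approved", "content published"
    }

record = build_audit_record(
    model_id="vendor-llm",                   # hypothetical identifiers
    model_version="2024-06-01",
    prompt="Is this customer eligible for the loyalty discount?",
    output="Eligible: yes",
    human_reviewer="agent_042",
    action_taken="discount applied",
)
print(json.dumps(record, indent=2))
```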
Policy Language: Keep It Explicit
- Exclusions with path-to-buyback: Start with a clear exclusion; allow scheduled buybacks for defined use cases with sublimits and aggregates.
- Definitions: Spell out "automated decision," "generated content," "synthetic media," and "AI service provider."
- Trigger clarity: Cover resulting loss from an AI failure, but exclude the cost to fix or retrain the model itself.
- Crime/Cyber coordination: Treat deepfake/social engineering as a named peril with verification conditions.
Controls Worth Actual Credit
- Two-person verification for payments, vendor changes, and sensitive data access, regardless of executive requests.
- Human-in-the-loop review before publishing, adjudicating, or executing high-impact actions (see the gating sketch after this list).
- Content disclaimers and throttles for customer-facing AI, plus rate limiting and abuse detection.
- Vendor governance: SLAs with indemnities, breach notice, usage logs, and the right to suspend updates.
- Alignment to standards like the NIST AI Risk Management Framework.
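To show what gated automation can look like at the workflow level, here is a minimal sketch of an approval gate that holds high-impact actions until a named human signs off; the action names, dollar threshold, and approval mechanism are assumptions for illustration.

```python
# Minimal human-in-the-loop gate: the AI may propose actions, but anything on the
# high-impact list (or above a dollar threshold) waits for a named human sign-off.
# Action names, the threshold, and the approval mechanism are illustrative assumptions.

HIGH_IMPACT_ACTIONS = {"publish_content", "approve_refund", "transfer_funds"}
AUTO_APPROVE_LIMIT = 500  # dollars; larger amounts always need review

def execute(action, amount=0, proposed_by="ai", approved_by=None):
    """Run an AI-proposed action only if it is low impact or a human has approved it."""
    needs_review = action in HIGH_IMPACT_ACTIONS or amount > AUTO_APPROVE_LIMIT
    if needs_review and approved_by is None:
        return f"HELD for review: {action} (${amount}) proposed by {proposed_by}"
    return f"EXECUTED: {action} (${amount}), approved by {approved_by or 'auto-policy'}"

print(execute("answer_product_faq", proposed_by="chatbot"))                      # runs
print(execute("approve_refund", 250, proposed_by="chatbot"))                     # held
print(execute("approve_refund", 250, proposed_by="chatbot", approved_by="ops"))  # runs
```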
Capacity and Pricing Discipline
- Use tight sublimits plus per-event and annual aggregates for AI perils (a worked payout sketch follows this list).
- Apply coinsurance where clients refuse key controls or vendor transparency.
- Watch vendor concentration. A widely used model or platform equals correlated loss potential.
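To show how these levers interact, here is a minimal worked sketch of a single AI-peril claim run through coinsurance, a per-event sublimit, and a remaining annual aggregate; all figures and the ordering of the levers are simplifying assumptions, not standard policy mechanics.

```python
# Worked sketch: one AI-peril claim run through coinsurance, a per-event sublimit,
# and a remaining annual aggregate. All figures are illustrative assumptions.

def insurer_payout(loss, coinsurance_insured_share, per_event_sublimit, remaining_aggregate):
    """Return what the carrier pays after the insured's coinsurance share and both caps."""
    after_coinsurance = loss * (1 - coinsurance_insured_share)   # insured retains their share
    capped_per_event = min(after_coinsurance, per_event_sublimit)
    return min(capped_per_event, remaining_aggregate)            # aggregate is the final cap

loss = 3_000_000                         # gross loss from one AI-driven event
payout = insurer_payout(
    loss,
    coinsurance_insured_share=0.20,      # insured keeps 20% for weak controls
    per_event_sublimit=1_000_000,
    remaining_aggregate=1_500_000,
)
print(f"Gross loss:    ${loss:,.0f}")
print(f"Carrier pays:  ${payout:,.0f}")
print(f"Insured keeps: ${loss - payout:,.0f}")
```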
Claims Readiness
- Preserve logs, prompts, outputs, and model versions at first hint of a dispute.
- Document human reviews and approvals to show diligence.
- For synthetic-fraud events, capture call/video metadata and verification steps attempted.
- Coordinate between cyber, E&O, and crime adjusters early to avoid coverage whiplash.
What Could Reopen This Market
- Transparent models with audit trails, stable versioning, and predictable failure modes.
- Common definitions across policy forms to reduce silent AI exposure.
- Meaningful, verifiable controls that cut frequency: stronger verification, gated automation, and tested guardrails.
- Better data: incident sharing, loss coding for AI causes, and outcomes tied to controls.
For now, the message is simple: if risk professionals won't price it, buyers are flying solo. Tight controls, clean contracts, and explicit policy language are the only real safety net until the market finds footing.
Worth a read on the social-engineering front: the BBC's report on the Arup deepfake heist. If your clients are scaling staff-facing AI quickly, structured, role-based training can reduce avoidable errors.