Insurers Won't Touch AI - Take the Hint

Insurers are moving to carve out AI liability as opaque systems threaten multi-line losses. The playbook now: price by use case, require proof of controls, set sublimits.


AI Liability Is Spooking Insurers. Here's What That Signals - And What To Do About It

Insurers don't scare easily. We price hurricanes and wildfires for a living. But the current trend is clear: top carriers are asking to exclude AI liability because the loss potential is huge, the systems are opaque, and the case law is thin.

Recent reporting shows firms like AIG, American Financial Group's Great American, and WR Berkley seeking permission to carve AI out of coverage. The worry isn't theoretical - it's balance-sheet math. One LLM gone wrong, one data leak, one automated error at scale, and you're staring at multi-line, multi-jurisdiction losses.

What's Driving The Pullback

  • Unbounded severity: AI can scale mistakes across millions of users in minutes. That's product liability, E&O, media/IP, and cyber exposure all tied together.
  • Attribution fog: Vendor model, client prompts, training data, or third-party plug-ins - who's at fault? Defense costs spike before facts settle.
  • Cyber uplift: AI is helping attackers ship smarter malware and exploit new weaknesses in AI-enabled stacks.
  • Opaque models: "It's too much of a black box," said Dennis Bertram, Europe head of cyber insurance for Mosaic, in comments reported by the FT. Mosaic may cover some embedded software risks, but it's avoiding LLM exposure.

Where Exclusions Are Aiming

Requests include broad wording to exclude claims arising from "any actual or alleged use" of AI or from any product that "incorporates" AI. Some carriers signal these are precautionary filings, but the message to the market is plain: the default stance is caution until there's data, standards, and case law.

The AI Exposure Stack (Know What You're Pricing)

  • LLM output risk: Defamation, discrimination, bad advice, IP infringement, privacy violations.
  • Automation risk: Autonomous actions that trigger financial loss, safety incidents, or regulatory breaches.
  • Data supply chain: Tainted training data, scraped content, and consent issues.
  • Model supply chain: Base model, fine-tuning, third-party plug-ins, and orchestration layers.
  • Security: Prompt injection, model theft, data leakage, and shadow AI inside enterprises.
  • Accumulation: A single foundation model failure impacting thousands of insureds at once.

Underwriting Playbook (Practical, Now)

  • Triage by use case: Separate embedded analytics from customer-facing LLMs and fully automated decisioning. Price and treat them differently.
  • Controls questionnaire: Require model provenance, data governance, RLHF/guardrails, human-in-the-loop, monitoring, rollback, and kill-switch procedures.
  • Vendor risk: Ask for model providers, versions, plug-ins, and contractual indemnities. Verify logging and incident response commitments.
  • Secure-by-default: Look for isolation of secrets, content filtering, prompt hardening, and red-teaming. Ask for evidence, not promises.
  • Governance: Board-level oversight, documented AI policies, bias testing, and audit trails tied to key decisions.
  • Metrics that matter: Hallucination rates on production tasks, harmful-output rates, drift alerts, and time-to-disable (see the sketch after this list).
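
To make those metrics concrete, here is a minimal Python sketch that derives a hallucination rate, a harmful-output rate, and mean time-to-disable from a hypothetical monitoring log. The record fields, schema, and sample figures are illustrative assumptions, not an industry standard.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class OutputRecord:
    """One logged model response, flagged by downstream review (hypothetical schema)."""
    timestamp: datetime
    hallucinated: bool   # factually wrong on a production task
    harmful: bool        # policy-violating or unsafe content


@dataclass
class Incident:
    """A confirmed failure and when the feature was switched off (hypothetical schema)."""
    detected_at: datetime
    disabled_at: datetime


def output_metrics(records: list[OutputRecord]) -> dict[str, float]:
    """Hallucination and harmful-output rates over the logged sample."""
    total = len(records)
    if total == 0:
        return {"hallucination_rate": 0.0, "harmful_output_rate": 0.0}
    return {
        "hallucination_rate": sum(r.hallucinated for r in records) / total,
        "harmful_output_rate": sum(r.harmful for r in records) / total,
    }


def mean_time_to_disable(incidents: list[Incident]) -> timedelta:
    """Average gap between detection and kill-switch activation."""
    if not incidents:
        return timedelta(0)
    total = sum((i.disabled_at - i.detected_at for i in incidents), timedelta(0))
    return total / len(incidents)


if __name__ == "__main__":
    now = datetime(2025, 11, 26, 9, 0)
    records = [
        OutputRecord(now, hallucinated=False, harmful=False),
        OutputRecord(now, hallucinated=True, harmful=False),
        OutputRecord(now, hallucinated=False, harmful=True),
        OutputRecord(now, hallucinated=False, harmful=False),
    ]
    incidents = [Incident(detected_at=now, disabled_at=now + timedelta(minutes=45))]
    print(output_metrics(records))          # {'hallucination_rate': 0.25, 'harmful_output_rate': 0.25}
    print(mean_time_to_disable(incidents))  # 0:45:00
```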

Policy Language That Reduces Shock

  • Precise definitions: Define "AI," "model," "automated decisioning," and "generative output" to avoid spillover into unrelated software.
  • Sublimits and coinsurance: Apply to LLM output liability, automation-triggered loss, and data ingestion disputes (a worked example follows this list).
  • Claims-made clarity: Address continuous training and versioning so retro dates and prior acts aren't loopholes.
  • Warranties: Client attests to model monitoring, data rights, and vendor controls with breach tied to coverage consequences.
  • Carve-backs: If using a broad AI exclusion, restore coverage for tightly defined, lower-risk use cases with verified controls.
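
As a worked example of how a retention, coinsurance, and an AI sublimit interact on an LLM output-liability claim, here is a minimal sketch. The figures and the order of operations (retention first, then the insurer's coinsurance share, then the sublimit cap) are assumptions for illustration; actual mechanics depend on the policy wording.

```python
def ai_sublimit_payout(gross_loss: float,
                       retention: float,
                       insurer_coinsurance_share: float,
                       ai_sublimit: float) -> float:
    """Insurer payout for an AI output-liability loss under assumed terms.

    Assumed order of operations:
    1. Subtract the insured's retention.
    2. Apply the insurer's coinsurance share to the remainder.
    3. Cap the result at the AI-specific sublimit.
    """
    after_retention = max(gross_loss - retention, 0.0)
    after_coinsurance = after_retention * insurer_coinsurance_share
    return min(after_coinsurance, ai_sublimit)


# Hypothetical claim: $5m gross loss, $250k retention,
# 80% insurer share, $2m AI sublimit -> insurer pays $2m (the sublimit binds).
print(ai_sublimit_payout(5_000_000, 250_000, 0.80, 2_000_000))  # 2000000.0
```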

Portfolio Management

  • Aggregation mapping: Track exposure to major foundation models and AI platforms across the book (see the sketch after this list).
  • Scenario tests: Model correlated outages, harmful-output incidents, and regulatory actions hitting entire sectors.
  • Reinsurance: Seek AI-specific event definitions and clear hours clauses for cyber/tech events driven by model failure.
  • Reserving discipline: Expect longer-tail disputes over attribution, IP, and discrimination. Strengthen defense-cost assumptions.
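
Here is a minimal sketch of aggregation mapping plus one correlated-failure scenario, assuming a book where each policy is tagged with the foundation model its AI stack depends on. The model names, limits, and damage factor are invented for illustration, not calibrated figures.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Policy:
    insured: str
    foundation_model: str   # upstream model the insured's AI stack depends on
    ai_limit: float         # AI sublimit exposed on this policy


def aggregate_by_model(book: list[Policy]) -> dict[str, float]:
    """Total AI limit exposed to each foundation model across the book."""
    totals: dict[str, float] = defaultdict(float)
    for p in book:
        totals[p.foundation_model] += p.ai_limit
    return dict(totals)


def scenario_loss(book: list[Policy], failed_model: str, damage_factor: float) -> float:
    """Gross loss if one foundation model fails and every dependent policy
    burns `damage_factor` of its AI limit (a crude, assumed severity model)."""
    return sum(p.ai_limit * damage_factor
               for p in book if p.foundation_model == failed_model)


if __name__ == "__main__":
    book = [
        Policy("Acme Bank", "model-a", 2_000_000),
        Policy("Retailer Co", "model-a", 1_000_000),
        Policy("HealthTech Inc", "model-b", 3_000_000),
    ]
    print(aggregate_by_model(book))             # {'model-a': 3000000.0, 'model-b': 3000000.0}
    print(scenario_loss(book, "model-a", 0.4))  # 1200000.0
```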

Broker Guidance For Clients

  • Inventory AI: Map every AI system, decision, and vendor. Unknown use equals unfunded risk (a sample inventory entry follows this list).
  • Document controls: Turn security, governance, and testing into artifacts you can hand underwriters.
  • Accept structure: Expect exclusions, sublimits, and retentions. Use evidence to win back carve-backs and better pricing.
  • Contract upstream: Push obligations and indemnities to model vendors and integrators.
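
For a sense of what an AI inventory can look like as an underwriting artifact, here is a minimal sketch of one entry serialized to JSON. The fields are assumptions about what underwriters commonly ask for, not a prescribed schema.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class AISystemEntry:
    """One row in a hypothetical AI inventory handed to underwriters."""
    name: str
    business_decision: str        # what the system decides or outputs
    vendor: str                   # model provider or integrator
    model_version: str
    human_in_the_loop: bool
    data_rights_documented: bool
    kill_switch_tested: bool


inventory = [
    AISystemEntry(
        name="claims-triage-assistant",
        business_decision="routes incoming claims to adjusters",
        vendor="ExampleModelCo",
        model_version="v3.1",
        human_in_the_loop=True,
        data_rights_documented=True,
        kill_switch_tested=False,
    ),
]

# Serialize the inventory as a submission artifact for underwriters.
print(json.dumps([asdict(e) for e in inventory], indent=2))
```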

Regulatory And Standards To Watch

Bottom Line For Carriers

Pulling back on AI isn't fear - it's discipline. Until we get cleaner data, stronger controls, and case law, broad exclusions and tight sublimits are rational.

But there's business to write. Start with defined use cases, demand evidence of control, and price the uncertainty. Move from blanket "no" to a documented "yes, under these conditions."
