Insurers Hit the Brakes on AI Risk as Regulators Weigh Exclusions

Insurers are moving to ring-fence AI risk with exclusions, tighter terms, and lower limits after a run of messy losses. Coverage will hinge on governance and proof of controls.

Published on: Nov 24, 2025

AI Liability Is Being Ring-Fenced: What Carriers, Brokers, and Risk Managers Need to Do Next

Commercial insurers in the U.S. are drafting exclusions for AI-related liabilities and seeking state approval, according to the Financial Times. That's a stark signal from an industry built to price uncertainty. The view on the ground: at scale, today's AI risk is uninsurable without tighter terms, better data, and stronger controls.

Why Underwriters Are Hitting Pause

  • Opacity: Large models fail in ways that are hard to trace, validate, or litigate. Evidence is thin, causation gets messy, and claims handling slows.
  • Correlation: The same model stack sits across thousands of insureds, so one update, jailbreak, or vendor outage can trigger correlated losses.

Carriers will price a single $1B loss. They can't absorb 10,000 nine-figure losses tied to the same model event. That's systemic risk - the kind that turns a niche failure into a marketwide hit.

The Losses Are Real

  • A defamation suit tied to an AI summary reportedly sought $110 million from a major tech company.
  • Air Canada was forced to honor a discount its chatbot invented.
  • Fraudsters used a deepfake of a senior executive at an engineering firm to trick an employee into approving a $25 million transfer.

None of these needed exotic research. Off-the-shelf tools plus weak controls were enough. Underwriters see exposure spanning defamation, product liability, copyright, privacy, employment, securities disclosure, and social-engineering fraud made worse by deepfakes.

From Silent Cyber to Silent AI

A decade ago, carriers found "silent cyber" hiding in traditional forms and responded with exclusions, sub-limits, and standalone cyber. AI is on the same path. Expect tightening across GL, E&O, D&O, and cyber - with AI-specific limits, coinsurance, required controls, and narrower "occurrence" definitions.

Reinsurers want clarity before adding capacity. We've seen this movie before: in 2023, the Lloyd's market pushed war exclusions into cyber policies to cap tail risk. AI will get similar treatment.

Regulators and the Wording Fight

State approvals will run through NAIC processes. Expect pointed questions: does an exclusion hit only vendor models or also in-house builds and simple automations? How will a carrier distinguish an AI-caused error from a human mistake?

Governance is the path to insurability. Carriers need repeatable, auditable controls that map to recognized frameworks like the NIST AI Risk Management Framework and ISO/IEC 42001. Model provenance, evaluations, red-team results, vendor attestations, and logs double as underwriting evidence.

What Near-Term Coverage Could Look Like

  • Exclusions/carve-outs: Hallucinations, autonomous actions, training-data IP claims.
  • Sublimits/coinsurance: Especially for deepfake-driven social engineering and model-related outages.
  • Warranties/conditions: Human-in-the-loop approvals for high-risk actions, kill switches, comprehensive logging.
  • Parametric options: Payouts triggered by defined events (vendor outage, model rollback) to avoid causation fights; a minimal sketch of how such a trigger could work follows this list.
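
To make the parametric idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the trigger definition, threshold, and payout figure are invented for illustration, not drawn from any actual policy wording.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class ParametricTrigger:
    """Illustrative parametric trigger: pays a fixed amount once an
    objectively measurable event crosses an agreed threshold."""
    event_type: str        # e.g. "vendor_outage" or "model_rollback"
    threshold: timedelta   # agreed event duration before payout
    payout_usd: int        # fixed payout, no loss adjustment needed

@dataclass
class ObservedEvent:
    event_type: str
    duration: timedelta    # measured by an agreed third-party feed

def payout(trigger: ParametricTrigger, event: ObservedEvent) -> int:
    """Pay on the measured event itself, not on proven damages -
    that is what sidesteps the causation fight."""
    if (event.event_type == trigger.event_type
            and event.duration >= trigger.threshold):
        return trigger.payout_usd
    return 0

# Example: a 4-hour model-vendor outage against a 2-hour trigger.
trigger = ParametricTrigger("vendor_outage", timedelta(hours=2), 500_000)
print(payout(trigger, ObservedEvent("vendor_outage", timedelta(hours=4))))  # 500000
```

The design point: both sides agree up front on an observable metric and a fixed payout, so claims turn on measurement rather than fault.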

Longer-Term Capacity: Think Pooled or Public-Private

For AI catastrophe scenarios, the market may pivot to pooled or government-supported capacity - similar to terrorism and flood programs. Reinsurers have warned for years that correlated cyber exposures need shared approaches. AI brings the same coordination problem with a different failure mode.

How Insureds Can Stay Insurable While Adopting AI

  • Inventory and register: Catalog models, vendors, versions, data touched, and where automation is allowed.
  • Access and data hygiene: Role-based access, secrets management, PII minimization, and clear data lineage.
  • Change control: Gate model updates with approvals, pre-prod testing, rollback plans, and a hard kill switch.
  • Monitoring and logging: Capture prompts, outputs, decisions, and interventions; retain logs for audit and claims.
  • Human-in-the-loop: Mandatory sign-off for high-impact actions (payments, shipments, legal comms, HR decisions); the sketch after this list shows one way to wire this gate to logging and a kill switch.
  • Procurement discipline: Indemnities, SLAs, security addenda, model transparency, and audit rights in third-party contracts.
  • Testing and red-teaming: Safety, bias, jailbreak, performance drift, and content moderation assessments.
  • Fraud and deepfake playbooks: Call-back controls, multi-channel verification, and staff drills.
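
Several of these controls can be sketched in a few lines of code. The following Python example is a hypothetical illustration - the action types, flag, and function names are all invented - of how the human-in-the-loop gate, kill switch, and decision logging above might fit together:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-controls")

# Kill switch: one flag an operator can flip to halt all AI-driven actions
# (in production this would live in a config service, not process memory).
AI_ACTIONS_ENABLED = True

# Actions treated as high-impact and therefore requiring human sign-off.
HIGH_IMPACT_ACTIONS = {"payment", "shipment", "legal_comms", "hr_decision"}

def execute_ai_action(action_type: str, payload: dict, approver: str | None = None) -> bool:
    """Gate an AI-proposed action behind the kill switch and, for
    high-impact actions, a named human approver. Every decision is
    logged so the record doubles as audit and claims evidence."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "action": action_type,
        "payload": payload,
        "approver": approver,
    }
    if not AI_ACTIONS_ENABLED:
        record["outcome"] = "blocked_by_kill_switch"
    elif action_type in HIGH_IMPACT_ACTIONS and approver is None:
        record["outcome"] = "held_for_human_approval"
    else:
        record["outcome"] = "executed"
    log.info(json.dumps(record))
    return record["outcome"] == "executed"

# An AI-proposed payment is held until a named human signs off.
execute_ai_action("payment", {"amount_usd": 25_000})                  # held
execute_ai_action("payment", {"amount_usd": 25_000}, approver="cfo")  # executed
```

The structured log lines are the point: they capture who approved what, when, and under which controls - exactly the kind of evidence the underwriter checklist below asks for.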

What to Hand Your Underwriter

  • Model inventory and provenance docs; who built or supplied each model and why.
  • Evaluation and red-team summaries, including adverse scenarios and mitigations.
  • Policy artifacts: access controls, change management, logging standards, and kill-switch procedures.
  • Incident response runbooks and tabletop results for AI misuse and vendor outages.
  • Third-party terms: SLAs, indemnities, data-processing addenda, and audit rights.
  • Framework mapping: attestations against NIST AI RMF and ISO/IEC 42001.

Bring your broker in early. Solid controls won't erase exclusions, but they can preserve broader terms and pricing as the market hardens.

The Bottom Line

Insurers are clear: AI is promising, but the worst-case scenarios won't be bankrolled without stronger governance. Whether this nudges safer deployments or shifts more risk onto buyers will be decided in regulatory hearing rooms - not in a model-release blog post.

