Alarming Reality: AI Insurance Risk Forces Major Insurers to Reject Coverage
AI is colliding with the limits of traditional insurance. Major carriers are moving to exclude AI-related liabilities, calling complex models a "black box" they cannot assess or price with confidence. If you lead risk, finance, or operations, this isn't a future problem - it's a coverage gap showing up in policy renewals right now.
The takeaway is blunt: without insurance backstops, AI risk becomes your balance sheet's problem. That changes how you build, buy, govern, and deploy every model across the business.
Why Insurers Are Pulling Back
Underwriters depend on data, patterns, and loss histories. AI breaks that comfort. Models behave in ways that are hard to explain, and failure modes are wide-ranging - from hallucinated outputs to biased decisions to silent drift over time.
Pricing uncertainty is the core issue. If loss frequency and severity can't be estimated, coverage either becomes prohibitively expensive or disappears altogether.
The Systemic Risk That Keeps Actuaries Up at Night
Insurers don't fear one big claim as much as thousands of medium ones hitting at once. Picture a widely used model making the same error across many policyholders on the same day. That's a capital event, not just a claim file.
As one broker put it publicly, a $400 million loss to one company can be handled; 10,000 simultaneous claims from a single AI failure cannot. That's the scenario driving exclusions.
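The arithmetic behind that fear is simple to sketch. The claim counts, severities, and failure rates below are illustrative assumptions for demonstration, not figures from any carrier:

```python
# Illustrative sketch: why correlated AI failures differ from independent claims.
# All numbers below are hypothetical assumptions, not real underwriting data.

n_policyholders = 10_000
avg_claim = 2_000_000          # assumed average loss per affected policyholder ($)
annual_claim_rate = 0.001      # assumed independent-failure rate per policyholder

# Normal year: failures are independent, so expected losses are small and stable.
expected_independent_loss = n_policyholders * annual_claim_rate * avg_claim

# Systemic event: one shared model fails for every policyholder on the same day.
correlated_loss = n_policyholders * avg_claim

print(f"Expected independent losses: ${expected_independent_loss:,.0f}")
print(f"Single correlated AI event:  ${correlated_loss:,.0f}")
# Under these assumptions the correlated scenario is 1,000x larger - the
# scale that traditional capital and reinsurance structures aren't built for.
```

Under these toy assumptions, a normal year produces $20 million in expected claims, while one shared-model failure produces $20 billion at once. The point is not the exact numbers but the ratio: correlation, not severity, is what breaks the actuarial model.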
Major Players Are Seeking AI Exclusions
Large carriers - including AIG, Great American Insurance Group, and WR Berkley - are pursuing regulatory approval to exclude AI-related liabilities from corporate policies. This is a significant stance: the firms that usually compete to insure emerging risk are stepping back in unison.
Expect new endorsements that carve out algorithmic errors, automated decisions, model failures, and third-party AI vendor incidents from E&O, cyber, and general liability. Read those endorsements carefully; small wording changes can mean large uncovered losses.
Real Incidents Fueling the Retreat
- Google: An AI Overview falsely accused a solar company of legal issues, triggering a reported $110 million lawsuit.
- Air Canada: The airline was forced to honor discounts a chatbot invented without authorization.
- Arup: Fraudsters used AI voice cloning on a video call to impersonate an executive and steal $25 million.
What This Means for Insurance and Management Leaders
If your business runs on AI - or even relies on vendors that do - assume standard policies may exclude the most material risks. You'll need a tighter operational risk program, thicker contracts, and a financial plan for self-insured losses.
Practical Actions: Close the Gap Before It Closes on You
- Model inventory and criticality
- Catalog every AI system and third-party dependency. Flag revenue-impacting, customer-facing, safety-critical, and compliance-relevant use cases.
- Define materiality thresholds and kill-switch criteria for each critical model.
- Testing, monitoring, and controls
- Pre-release testing for accuracy, bias, security, and data leakage. Red-team high-impact scenarios.
- Always-on monitoring for drift, anomalies, and prompt injection. Keep immutable logs for audit and claims defense.
- Human-in-the-loop for decisions that affect money, safety, employment, credit, or healthcare.
- Vendor risk and contracts
- Demand clear service descriptions, model update notices, uptime/SLOs, and audit rights.
- Negotiate indemnities, caps, sublimits, and definitions that include "algorithmic errors," "automated decisions," and "model outputs."
- Require baseline controls (security certifications, data handling, incident response) and proof of any available insurance.
- Policy review and placement
- Scrutinize all exclusions and endorsements across E&O, cyber, GL, media, and D&O. Seek affirmative AI wording where possible.
- Ask your broker about specialty markets, sublimited endorsements, and manuscript language for specific use cases.
- Coordinate limits and triggers across policies to avoid gaps and stacking issues.
- Capital and alternative risk
- Build reserves for AI incidents. Consider captives, co-insurance, and risk-sharing with vendors.
- Explore parametric covers (e.g., outage events) where loss mapping is clearer than fault.
- Governance and documentation
- Establish an AI policy, approval gates, and an incident response plan with legal, compliance, IT, and risk at the table.
- Map to recognized frameworks to strengthen defenses and claims posture, such as the NIST AI Risk Management Framework and the NAIC AI Principles.
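One way to make the inventory and kill-switch steps above concrete is a minimal model registry. This is a sketch only; the field names, thresholds, and vendor name are illustrative assumptions, not a standard schema:

```python
# Minimal sketch of an AI model inventory with criticality flags and
# kill-switch criteria. Field names and thresholds are illustrative
# assumptions, not an industry standard.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    vendor: str                      # "internal" or a third-party provider
    customer_facing: bool
    revenue_impacting: bool
    compliance_relevant: bool
    max_error_rate: float            # kill-switch threshold (fraction of requests)
    observed_error_rate: float = 0.0

    @property
    def critical(self) -> bool:
        # A model is critical if any materiality flag is set.
        return (self.customer_facing
                or self.revenue_impacting
                or self.compliance_relevant)

    def should_kill(self) -> bool:
        # Trip the kill switch when a critical model exceeds its error budget.
        return self.critical and self.observed_error_rate > self.max_error_rate

# Usage: flag a customer-facing chatbot that has drifted past its threshold.
chatbot = ModelRecord(
    name="support-chatbot",
    vendor="example-llm-vendor",     # hypothetical vendor name
    customer_facing=True,
    revenue_impacting=True,
    compliance_relevant=False,
    max_error_rate=0.02,
    observed_error_rate=0.05,
)
print(chatbot.critical, chatbot.should_kill())   # True True
```

Even a spreadsheet version of this record, kept current, gives legal and risk teams the materiality thresholds and audit trail the contract and claims steps above depend on.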
Your Options If Coverage Disappears
- Self-insure for AI incidents and set aside explicit reserves.
- Implement stronger risk mitigation before scaling AI use cases.
- Pause or narrow high-severity use cases until insurable.
- Accept full financial responsibility for defined AI errors (board visibility required).
FAQ
Which insurers are rejecting AI coverage?
Large carriers including AIG, Great American Insurance Group, and WR Berkley are seeking to exclude AI liabilities.
What incidents are triggering concern?
Lawsuits tied to Google's AI Overview, Air Canada's chatbot errors, and a voice-cloning theft targeting Arup. The pattern: errors propagate fast, and attribution is messy.
Why is systemic risk different from normal claims?
One flawed model can produce identical failures for thousands of companies at once. That overwhelms traditional capital models and reinsurance structures.
What can businesses do without insurance?
Strengthen testing and monitoring, tighten vendor contracts, create AI reserves, and adopt formal governance and audit trails to reduce incident frequency and severity.
Is anyone still offering coverage?
Some specialty markets may offer narrow or sublimited coverage, but large providers are moving to exclude AI risk from standard policies.
Looking Ahead
The insurance retreat is a signal: AI risk is real, scalable, and currently hard to price. Expect stricter underwriting, tighter exclusions, and a push for standards that make losses more predictable.
If you lead insurance or management decisions, treat AI like any other high-severity exposure - control it, contract for it, and fund it. Coverage may return, but only after the industry can quantify the risk with more confidence.
Want to upskill your team on AI risk, governance, and practical controls? Explore curated programs at Complete AI Training - Popular Certifications.