Insurance Market Struggles to Match AI Risk Exposure
AI adoption is widening risk exposure across fraud, operational resilience, intellectual property, privacy and directors-and-officers liability, according to Aon's AI risk report. The insurance market is responding mainly through policy wording changes and selective products rather than through a broad new class of cover.
The timing matters. AI use is spreading faster than companies are building mature controls. McKinsey found that 88% of survey respondents reported regular AI use in at least one business function in 2025, up from 78% a year earlier, even as most organizations remained primarily in pilot phases.
Security concerns are slowing adoption of more advanced systems. In a separate McKinsey survey, nearly two-thirds of respondents said security and risk concerns were the main barrier to scaling agentic AI, and active mitigation lagged perceived relevance across nearly every major risk category.
Five Categories of Exposure
Aon organizes AI-related exposure into five areas:
- Fraud and social engineering
- Compromise of models and data pipelines
- Dependence on third-party AI services
- Shadow AI
- Legal or reputational risk
Shadow AI remains a significant unmanaged channel. Netskope found that 47% of generative AI users still use personal AI apps for work, down from 78% a year earlier but still leaving substantial exposure.
How Insurers Are Responding
Insurers are taking three main approaches: applying AI-related exclusions or clarifying endorsements case by case, adding affirmative AI cover to existing cyber or liability policies, or offering standalone AI products with narrower scope.
Aon names offerings from Armilla, Munich Re's AiSure, AXA XL, Vouch and others, but says capacity remains limited and adoption will likely be slower than earlier digital-risk transitions.
Market demand exists, but supply constraints are real. In a survey of 600 corporate insurance decision-makers across six major markets, more than 90% said they wanted coverage for generative AI-related risks. Yet the same group said current insurability challenges remain significant in the short term.
Governance Now Part of Underwriting
Directors-and-officers underwriters are paying closer attention to board oversight, public disclosures, risk registers, model testing and third-party controls when assessing AI-related exposure.
This emphasis aligns with formal EU compliance deadlines and governance frameworks many U.S. organizations use to structure AI controls. The EU AI Act entered into force on August 1, 2024, became applicable in stages from February 2, 2025, and will be fully applicable on August 2, 2026.
In the United States, NIST's AI Risk Management Framework is designed to help organizations manage AI risk, with a generative-AI profile meant to identify risks unique to those systems.
The Underwriting Reality
AI risk is already being underwritten, but often through endorsements, wording scrutiny and capacity limits rather than through broad, standardized cover. For enterprise buyers, the insurance structure matters as much as the risk framing.
The gap between demand and supply suggests that standardized AI insurance products remain months or years away. Underwriters are managing exposure through incremental changes to existing policies while building the data and experience needed for dedicated products.