Gen AI risk is scaling. Insurance demand is already here
Businesses are pushing Gen AI into products, services, and internal workflows at speed. A Geneva Association survey of 600 corporate insurance buyers across China, France, Germany, Japan, the UK, and the US found 71% have deployed Gen AI in at least one function.
More than 90% want dedicated cover for AI-related exposures, and two-thirds would pay 10%+ higher premiums for it. The signal is clear: demand is ahead of supply.
What buyers fear most
- Cybersecurity: model compromise, prompt injection, data leakage, and AI-enabled fraud.
- Third-party liabilities: IP infringement, defamation, bias/discrimination, and regulatory exposure.
- Operational disruption: outages from model providers, bad outputs driving errors, and broken workflows.
These risks cut across existing lines, blur boundaries, and can correlate across many insureds at once.
The insurability challenge
Verifying exposures and sizing losses is hard. Data is thin, models are dynamic, and dependencies on shared providers create accumulation potential.
As with early cyber, pricing, reserving, and capacity decisions carry model risk. Narrow wordings and exclusions are surfacing, but risk transfer alone won't work without more transparency and controls at the insured.
How the market is responding
Carriers are testing policy extensions and early-stage standalone AI covers. Modular structures are emerging to keep scope clear and adaptable.
Cross-sector partnerships among carriers, reinsurers, model providers, cyber vendors, and law firms will be needed to close protection gaps and build repeatable underwriting.
Signals from the Geneva Association
The association's leadership notes that adoption is outpacing clarity about the risks. The report urges insurers to anticipate buyer demand and shape safe, sustainable use of Gen AI.
It also stresses that Gen AI amplifies existing exposures and creates new ones, pushing carriers to define boundaries and test modular coverage that can evolve with the tech.
Guidance from the Geneva Association, alongside frameworks such as the NIST AI Risk Management Framework, can help align controls, wordings, and governance.
What insurers should build now
- AI exposure mapping: Segment by use case (customer-facing content, code, decision support, autonomy level), model type, provider dependency, and data sensitivity.
- Underwriting questions that matter: Model oversight (human-in-the-loop), input data controls, output validation, red-teaming, vendor SLAs, and incident response readiness.
- Clear coverage boundaries: Define what is "AI-caused," address silent AI across existing lines, and align exclusions with systemic triggers and legal trends.
- Modular coverage: Separate modules for third-party liability, first-party business interruption from AI services, IP/media, and regulatory defense, with optional risk services.
- Accumulation and scenario testing: Stress events like a major model provider outage, widespread content IP claims, or a universal prompt-injection exploit (see the sketch after this list).
- Pricing levers: Sublimits, waiting periods for service outage, coinsurance on uncertain perils, and credits for verified controls and attestations.
- Claims protocols: Triage for AI-driven incidents, forensic vendors on panel, evidence standards for model behavior, and fast path for takedown/mitigation.
- Data partnerships: Loss data sharing with vendors and reinsurers; use external telemetry to improve frequency/severity estimates.
- Risk services: Tooling for prompt governance, content filtering, watermarking checks, and employee training to reduce attritional, high-frequency losses.
- Broker education: Provide simple matrices showing what sits in cyber, tech E&O, media/IP, and the AI module to avoid overlap and disputes.
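To make the accumulation point concrete, here is a minimal sketch of a scenario test for one correlated event: a shared model provider outage hitting every insured in the portfolio that depends on that provider, with the pricing levers above (waiting period, coinsurance, sublimit) applied per policy. Every parameter and distribution is an illustrative assumption, not a calibrated or market figure.

```python
import random
import statistics

# Illustrative assumptions only: a portfolio of insureds buying an AI outage BI
# module, with a majority depending on one shared model provider.
PORTFOLIO_SIZE = 500
SHARED_PROVIDER_SHARE = 0.6   # fraction of insureds on the same provider
WAITING_PERIOD_HRS = 12
COINSURANCE = 0.8             # insurer share of the loss above the waiting period
SUBLIMIT = 250_000            # per-policy cap for this peril

def policy_payout(outage_hours: float, daily_bi_value: float) -> float:
    """Apply waiting period, coinsurance, and sublimit to one insured's loss."""
    covered_hours = max(0.0, outage_hours - WAITING_PERIOD_HRS)
    gross_loss = covered_hours / 24 * daily_bi_value
    return min(COINSURANCE * gross_loss, SUBLIMIT)

def simulate_event() -> float:
    """One correlated event: a single outage duration shared by all affected insureds."""
    outage_hours = random.lognormvariate(3.0, 0.6)        # ~20h median, heavy tail
    affected = int(PORTFOLIO_SIZE * SHARED_PROVIDER_SHARE)
    return sum(
        policy_payout(outage_hours, random.uniform(20_000, 60_000))  # varied BI values
        for _ in range(affected)
    )

random.seed(7)
event_losses = sorted(simulate_event() for _ in range(10_000))
print(f"mean event loss: {statistics.mean(event_losses):,.0f}")
print(f"99th percentile: {event_losses[int(0.99 * len(event_losses))]:,.0f}")
```

In practice a test like this would be co-developed with reinsurers and calibrated to actual provider dependencies, but even a toy version makes event-limit and guardrail conversations concrete.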
Product design ideas that fit the moment
- AI liability add-on: Extends tech E&O or media to cover AI-generated outputs, with carve-ins for training-data IP where feasible.
- AI outage BI: First-party coverage for downstream disruption from named AI service providers, with waiting period and capped limits (a claim-settlement sketch follows this list).
- Model risk endorsement: Covers costs to remediate bad outputs (recall, notice, rework) if defined controls were in place.
- Regulatory defense and penalties where permitted by law: Tied to documented AI governance and audit trails.
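As a minimal sketch of how the AI outage BI terms above could work at claim time, the class below settles individual outage claims against a waiting period, a per-occurrence limit, and an annual aggregate that erodes across the policy year. The structure and all figures are hypothetical placeholders, not a recommended design.

```python
from dataclasses import dataclass, field

@dataclass
class AIOutageBIPolicy:
    """Hypothetical AI outage BI module; all terms are illustrative placeholders."""
    waiting_period_hrs: float = 8.0
    hourly_indemnity: float = 2_000.0       # agreed value per covered outage hour
    per_occurrence_limit: float = 100_000.0
    annual_aggregate: float = 250_000.0
    aggregate_remaining: float = field(init=False)

    def __post_init__(self) -> None:
        self.aggregate_remaining = self.annual_aggregate

    def settle(self, outage_hours: float) -> float:
        """Pay one named-provider outage claim and erode the annual aggregate."""
        covered_hours = max(0.0, outage_hours - self.waiting_period_hrs)
        payout = min(covered_hours * self.hourly_indemnity, self.per_occurrence_limit)
        payout = min(payout, self.aggregate_remaining)
        self.aggregate_remaining -= payout
        return payout

# Two outages of a named AI provider in the same policy year.
policy = AIOutageBIPolicy()
print(policy.settle(outage_hours=30))   # 22 covered hours -> 44,000
print(policy.settle(outage_hours=90))   # capped by the per-occurrence limit -> 100,000
print(policy.aggregate_remaining)       # 106,000 left of the annual aggregate
```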
What buyers will pay for
According to the survey, demand is strong and budgets exist: more than 90% want dedicated AI cover, and two-thirds would pay at least 10% more. Buyers want clarity, fast claims handling, and bundled risk services that reduce incident frequency.
Translate that into value: clear wordings, practical pre-loss controls, and credible incident response.
Execution roadmap for the next 12-24 months
- Launch a modular AI endorsement with tight definitions; pilot with select sectors that have measurable controls.
- Stand up an AI risk committee spanning underwriting, claims, actuarial, legal, and cyber to manage wordings and accumulation.
- Co-develop scenarios with reinsurers and major AI providers; set portfolio guardrails and event limits.
- Bundle risk engineering: governance playbooks, prompt policy templates, vendor due diligence checklists, and training.
- Collect outcome data from every claim and near-miss to improve pricing and capacity rules.
The bottom line
Adoption is ahead of actuarial confidence, but the client need is immediate. Start narrow, price for uncertainty, reward strong controls, and keep modules flexible.
Cyber showed the path from add-ons to standalone. Gen AI is likely to follow that path, through years of data, collaboration, and iterative product design.
Want to upskill underwriting and broker teams on AI use cases and controls? Explore practical learning paths here: AI courses by job.