AI Is Redefining Cyber Insurance: What Carriers and Brokers Need to Change Now
AI adoption on both sides of the fence, by enterprises and attackers alike, is reshaping cyber insurance demand and design. For SMEs, AI risk is now the second-biggest reason to buy a cyber policy, cited by 35.8% of 2,054 respondents. Only broker guidance ranks higher, at 39%.
There's a growing gap between what clients think "AI cover" means and what policies actually address. Many standard forms don't speak to AI directly, which sets the stage for tough claims conversations.
Market Momentum, Real Losses
Forecasts point to a bigger market, fast. One major outlook projects global cyber premiums rising from roughly $16-$20 billion last year to $30-$50 billion by 2030.
Both loss data and incident response teams point to more AI-driven events. Deepfakes are boosting phishing by attacking visual and auditory trust (voice, video, and convincingly "human" cues), making social engineering harder to spot and easier to scale.
Coverage Gaps: AI Attacks vs. AI Errors
Here's the core tension. Most policies will cover losses from AI-powered attacks (think deepfake-enabled wire fraud or automated credential stuffing). But many exclude a company's own AI errors: incorrect outputs, hallucinations in chat, biased decisions, or IP leakage from misused tools.
One leader framed it well: if another driver hits your Formula 1 car, you're covered; if you blow the engine with a bad tune, you're not. That's roughly where many wordings sit today.
Underwriting Has Turned the Corner
From "Tell Us" to "Prove It"
Carriers are moving from self-attestation to evidence-based underwriting. Baseline cyber hygiene isn't just listed; it's verified. Expect hard proof of:
- Enforced MFA for cloud and privileged access
- Comprehensive, tested backups with recovery evidence
- Vulnerability management with documented SLAs
The new wrinkle is AI risk. Insurers worry about systemic, correlated loss tied to shared models, platforms, and agent frameworks. Some carriers are exploring AI exclusions. Others are underwriting AI risk explicitly based on governance and security maturity.
AI Governance Signals Underwriters Want to See
- Clear AI use policy for employees and contractors
- Tool vetting process (security, privacy, licensing, data residency)
- Guardrails to prevent sensitive data exposure in chat and prompts
- Controls for model outputs (human-in-the-loop for critical decisions)
- Logging and audit trails for AI-assisted actions (a minimal sketch follows below)
- Vendor risk reviews for model providers and LLM platforms
- Incident response plans that include AI-enabled fraud and deepfake playbooks
If you want a public reference point for program structure, the NIST AI Risk Management Framework (NIST AI RMF) is a solid benchmark to cite in underwriting discussions.
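To make the logging and audit-trail expectation concrete, here is a minimal sketch in Python of what a record for an AI-assisted action could capture. The schema, the `log_ai_action` helper, and the file-based store are illustrative assumptions rather than any carrier's standard or a vendor API; hashing prompts and outputs instead of storing them raw is one way to keep the trail from becoming a data-exposure risk of its own.

```python
import hashlib
import json
import time
import uuid

def log_ai_action(actor: str, tool: str, purpose: str,
                  prompt: str, output: str,
                  approved_by: str | None = None) -> dict:
    """Append a tamper-evident record of one AI-assisted action.

    Hypothetical schema: capture who used which tool, for what approved
    purpose, and whether a human signed off on a high-impact decision.
    """
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "actor": actor,                    # employee or service account
        "tool": tool,                      # vendor model, internal agent, etc.
        "purpose": purpose,                # mapped to an approved use case
        # Hash rather than store raw text so the log itself doesn't leak data.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "human_approval": approved_by,     # required for high-impact decisions
    }
    # An append-only store (SIEM, WORM bucket) in production; a local file here.
    with open("ai_audit_log.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: log a contract summary drafted with an external LLM.
log_ai_action("j.doe", "vendor-llm", "contract-summarization",
              prompt="Summarize clause 7 of the MSA", output="...",
              approved_by="legal-review")
```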
Supply Chain: Shared Exposure, Shared Standards
Insureds need sound internal controls, and they need their vendors aligned too. MFA, password managers, and incident response basics shouldn't stop at the org chart; they need to flow down to suppliers.
Insurance can accelerate that maturity. Many carriers now pair coverage with pre-breach services: security partners, threat intel, and advisory support. That reduces loss frequency and sharpens renewal outcomes.
From Passive to Active Insurance
The industry is shifting from annual questionnaires to ongoing validation. "AI-proof" posture checks before binding will become normal. Hybrid models, where verified security posture directly shapes limits, retentions, and terms, will spread.
AI moves faster than traditional renewal cycles. Underwriting and risk engineering have to move with it.
What Brokers and Carriers Should Do Now
Update Your Underwriting Pack
- Evidence-based MFA, backup testing, and vuln management
- AI governance questionnaire: use cases, data flows, vendor stack, guardrails
- Deepfake/social engineering controls: verification steps for finance, HR, and IT
- Proof of security awareness training with AI-specific modules
Clarify Policy Language
- Define coverage for AI-enabled attacks (e.g., deepfakes, LLM-assisted phishing)
- Spell out exclusions for first-party AI errors (hallucinations, biased outputs) vs. covered perils
- Address IP/data leakage via third-party AI tools
- Align social engineering and crime coverage with deepfake scenarios
Tighten Vendor Requirements
- MFA, password management, and incident response plans for key suppliers
- Contractual obligations for breach notification and cooperation
- Evidence of AI risk controls for vendors using shared models or agent frameworks
Raise Claims Readiness
- Playbooks for voice/video deepfakes and executive impersonation
- Out-of-band verification for payments, payroll, and vendor changes (see the sketch after this list)
- Fast forensics access and data for AI-related incidents (logs, prompts, outputs)
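To illustrate the out-of-band verification point, here is a minimal Python sketch of a release gate for payments and vendor changes. The `PaymentChangeRequest` fields, the threshold, and the channel labels are hypothetical; the principle is simply that an inbound voice or video request, however convincing, never authorizes a transfer on its own.

```python
from dataclasses import dataclass

@dataclass
class PaymentChangeRequest:
    requester: str            # who asked; identity may be spoofed
    amount: float
    channel: str              # "email", "voice", "video", "ticket"
    callback_confirmed: bool  # confirmed via a known-good number, not the inbound channel

def release_allowed(req: PaymentChangeRequest, threshold: float = 10_000.0) -> bool:
    """Gate payments, payroll, and vendor changes behind out-of-band confirmation."""
    # Voice and video are no longer trustworthy on their own (deepfakes),
    # so those channels always require a call-back to a number taken from
    # the vendor master file, never from the request itself.
    if req.channel in {"voice", "video"} and not req.callback_confirmed:
        return False
    # Large requests need confirmation regardless of channel.
    if req.amount >= threshold and not req.callback_confirmed:
        return False
    return True

# Example: a convincing "CFO" video call alone should not move money.
urgent = PaymentChangeRequest("cfo@example.com", 250_000.0, "video",
                              callback_confirmed=False)
assert release_allowed(urgent) is False
```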
Practical Guardrails for Insureds
- Vet AI tools before use; document approval and data handling
- Keep sensitive data out of chat unless you control the model and storage (see the redaction sketch after this list)
- Use mandatory call-back or secondary verification for any payment or credential request
- Add AI-focused training: deepfakes, prompt hygiene, and data exposure risks
- Log AI interactions tied to critical business processes
- Put a human in the loop for high-impact decisions driven by model outputs
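For the guardrail about keeping sensitive data out of chat, below is a minimal redaction sketch in Python. The regex patterns and the `redact_prompt` helper are illustrative only; a production deployment would use a proper DLP tool or vendor classifiers, and would route flagged prompts to review rather than silently scrubbing and sending them.

```python
import re

# Illustrative patterns only; a real deployment would use a DLP library
# or vendor-provided classifiers instead of a handful of regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(text: str) -> tuple[str, list[str]]:
    """Replace likely-sensitive values before a prompt leaves the organization."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings

# Example: flag the prompt for review instead of sending it automatically.
safe_text, hits = redact_prompt(
    "Draft a dunning email to jane@client.com re card 4111 1111 1111 1111")
if hits:
    print("Redacted:", hits)   # ["email", "card_number"]
print(safe_text)
```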
Bottom Line
Client anxiety around AI is real, and justified. Demand is rising, losses are surfacing, and the policy gap is visible. The winners will be the insurers and brokers who verify controls, speak clearly about coverage, and help clients build AI governance that holds up when a claim hits.
If you need to uplevel client education on AI policy, governance, and day-to-day use, consider credible training resources. A curated starting point by role is here: AI courses by job.