AI Is Now a Pricing Signal: Defend Better, Pay Less
Insurers are rewarding organizations that use AI to strengthen their defenses, while growing more cautious with those whose AI use introduces new risks.
In July 2025, a flaw in McHire, the AI recruiting platform used by McDonald's, exposed a simple truth: speed without security is expensive. The backend for restaurant operators accepted "123456" as both username and password and lacked multi-factor authentication. Security researchers Ian Carroll and Sam Curry found the issue and reported it, but not before the personal data of roughly 64 million applicants was put at risk. That's what unchecked AI adoption looks like in the wild.
This isn't rare. An IBM report notes that AI adoption is moving faster than AI security and governance. Last year, 13% of organizations reported breaches tied to AI models or applications, and another 8% weren't sure whether they had been compromised. Insurers have noticed. Many are tightening policy language, raising premiums, and adding exclusions for certain AI-related incidents. A Delinea survey found that 42% of respondents now see AI misuse and liability exclusions in their cyber policies, yet 86% also report discounts or credits for deploying AI-based security tools that harden defenses.
"AI is both a risk and an opportunity," says Nate Spurrier, vice president of insurance and counsel strategy at GuidePoint Security. That duality is now driving underwriting, pricing, and claims handling.
Underwriting has shifted from forms to proof
Carriers are moving past checkbox questionnaires and self-attestations. According to Delinea, 77% now require formal reviews by internal and IT security teams before issuing or renewing coverage, up from 56% a year earlier. Even that's table stakes.
"Leading cyber insurers have moved away from moment-in-time application forms toward continuous assessment of an organization's attack surface and controls," says Michael Phillips, Coalition's head of global cyber portfolio underwriting. Coalition, among others, bundles continuous attack surface monitoring, alerting, and guidance with coverage. The strategy is simple: connect real security posture to pricing and terms to reduce both claim frequency and severity.
As AI spreads across business processes, underwriters also want specifics on how it's used and governed. Who can use AI? For what tasks? What controls exist around data, prompts, and outputs? Is AI a helper for efficiency or a core element of the product you sell?
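Answering those questions is easier when each AI system is tracked in a structured inventory alongside its owner, data, and controls. The sketch below is a minimal illustration of what such a register might capture; the AISystem dataclass and its field names are assumptions for the example, not a schema any underwriter or carrier prescribes.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AISystem:
    """One entry in an internal AI inventory (illustrative fields only)."""
    name: str                       # e.g., "resume-screening-assistant"
    business_use: str               # what task the system performs
    data_classes: List[str]         # data categories it touches (PII, financial, ...)
    owner: str                      # accountable team or individual
    authorized_roles: List[str]     # who is allowed to use it
    vendor: str = "internal"        # third-party provider, if any
    logs_prompts_and_outputs: bool = False
    customer_facing: bool = False   # efficiency helper vs. part of the product you sell

# Example entry that an underwriter's governance questions could be answered from
inventory = [
    AISystem(
        name="resume-screening-assistant",
        business_use="rank inbound job applications",
        data_classes=["applicant PII"],
        owner="Talent Acquisition",
        authorized_roles=["recruiter", "hiring manager"],
        vendor="third-party SaaS",
        logs_prompts_and_outputs=True,
        customer_facing=True,
    )
]
```

Even a simple record like this makes it possible to answer "who can use AI, for what, and with which controls" with evidence rather than recollection.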
Coverage language is getting sharper, and trickier
Contracts are being rewritten to define what's covered and what's excluded when AI is involved. Some carriers add affirmative AI endorsements; others use exclusions because claims data is thin and risk can scale fast. That caution can cut both ways.
Phillips warns that blanket "AI-related loss" exclusions can backfire. If attackers used AI anywhere in the kill chain, a carrier could argue a classic ransomware event is out of scope. Another wrinkle: many policies predate generative AI. New AI terms are often layered on top of old wording, which creates blind spots and false confidence.
The fix: read the policy line by line, model real scenarios, and pressure-test how the language would apply across lines of coverage. As Spurrier puts it, the time to clarify AI coverage is during renewal and other pre-incident discussions, not during the claim.
How insureds are earning discounts (and avoiding carve-outs)
Premium relief goes to companies that can detect and respond faster. That means EDR/XDR deployed broadly, alerts monitored 24/7, and well-practiced response runbooks. AI that shrinks detection and recovery windows is valued because it reduces losses and downtime.
Insurers increasingly expect multi-factor authentication, strong endpoint coverage, continuous vulnerability management, and privileged access controls. The next wave: AI-powered defenses as a baseline requirement, similar to how MFA and EDR became non-negotiable. Lag behind, and you'll pay for it through higher premiums or narrower terms.
What insurers, brokers, and risk managers should verify now
- Inventory and governance of AI systems: business use cases, data flows, model ownership, access rights, logging, and retention.
- Third-party AI risk: vendor due diligence, pen tests, SOC 2/ISO evidence, MFA requirements, and breach notification SLAs.
- Controls that cut claims: phishing-resistant MFA, EDR/XDR with 24/7 monitoring, email security, PAM, immutable backups, and tested restoration.
- Secure development for AI features: threat modeling, red teaming for prompt injection/data leakage, and gated releases.
- Evidence on metrics: mean time to detect (MTTD) under 10 minutes for high-severity alerts, mean time to respond (MTTR) under 1 hour, patching SLAs by criticality, and EDR coverage above 95% of endpoints (a minimal calculation sketch follows this list).
- Scenario mapping before renewal: AI-assisted ransomware, data exposure via third-party AI, model tampering, and fraud/abuse via automated interactions.
- Contract clarity: where AI exclusions apply, carve-backs for incidental AI use by attackers, sublimits, panel requirements, and how tech E&O interacts with cyber.
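To show how the detection and response metrics above might be evidenced, here is a minimal sketch that computes MTTD, MTTR, and EDR coverage from exported incident records. The record format, timestamps, and endpoint counts are assumptions for illustration; only the thresholds mirror the figures in the list.

```python
from datetime import datetime
from statistics import mean

# Illustrative high-severity incident export with raise/detect/contain timestamps.
# The format is an assumption, not any insurer's required schema.
incidents = [
    {"raised": datetime(2025, 6, 1, 9, 0),    "detected": datetime(2025, 6, 1, 9, 6),
     "contained": datetime(2025, 6, 1, 9, 48)},
    {"raised": datetime(2025, 6, 14, 22, 15), "detected": datetime(2025, 6, 14, 22, 22),
     "contained": datetime(2025, 6, 14, 23, 5)},
]

# Mean time to detect: alert raised -> detected; mean time to respond: detected -> contained.
mttd_minutes = mean((i["detected"] - i["raised"]).total_seconds() / 60 for i in incidents)
mttr_minutes = mean((i["contained"] - i["detected"]).total_seconds() / 60 for i in incidents)

# EDR coverage as a share of known endpoints (counts are illustrative).
endpoints_total, endpoints_with_edr = 1200, 1162
edr_coverage = endpoints_with_edr / endpoints_total

print(f"MTTD: {mttd_minutes:.1f} min (target < 10)")
print(f"MTTR: {mttr_minutes:.1f} min (target < 60)")
print(f"EDR coverage: {edr_coverage:.1%} (target > 95%)")
```

Numbers like these, pulled from real telemetry rather than self-attestation, are the kind of evidence continuous-assessment underwriting is designed to reward.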
Questions to bring to renewal
- What specific AI-related exclusions or endorsements apply across cyber, tech E&O, media, and crime? Any silent AI exposures?
- Will you recognize independent detection/response benchmarks for pricing credits? Which frameworks or attestations help?
- For third-party AI incidents, how are sublimits, waiting periods, and panel obligations triggered?
- What evidence is most persuasive for improved terms: external scans, SOC reports, incident logs, tabletop results?
Why this matters now
AI is embedded in attacks and defenses. Pricing, retentions, and wording will track to measurable controls, not promises. If your insureds can prove faster detection, disciplined response, and tighter AI governance, they'll see better outcomes, both during underwriting and at claim time.