AI Fuels a 600% Surge in Cyberattack Infrastructure and New Liability for Insurers
AI is accelerating cyber risk: more attacks, faster cycles, and new vectors like deepfakes and HR infiltration. Insurers must price for speed, verify controls, and cover AI errors.

From script kiddies to deepfakes: AI supercharges cyber risk for insurers
AI has pushed cyber risk into a new phase: more attacks, faster cycles, and a wider spread of tactics. Infrastructure tied to cyberattacks has surged 600% in nine months, expanding potential loss frequency and aggregation. At the same time, AI's probabilistic errors are creating fresh liability for insureds that deploy it, opening a path for new coverages.
Two themes matter for insurance right now: underwriting needs to adjust to the "three Vs" of AI-accelerated threats, and product design must account for AI's inherent error rate. Below is a field guide for both.
The three Vs reshaping frequency and severity
Industry experts describe the shift as volume, velocity, and variety. Together they amplify exposure across the portfolio, compress response time, and introduce new systemic pathways, including HR and vendor channels.
Volume: low-skill attackers are now autonomous
AI lowers the barrier to entry. Actors who once copied tools ("script kiddies") can now generate convincing phishing, build basic automation, and run large campaigns with minimal skill. One Canadian group deployed a fully autonomous phishing toolkit, part of a broader 600% growth in attack infrastructure in the last nine months.
- Insurance impact: higher base frequency across SMB and mid-market, more small-loss attrition, and faster exhaustion of sublimits.
- Accumulation: identical AI tooling re-used across geographies and industries increases correlation inside a single event window.
- Vendor risk: shared services (email, CRM, HRIS) become amplification points for portfolio-wide loss.
Velocity: reconnaissance and exploitation compress to hours
AI shortens the time between probe and compromise. In one case, credential-stuffing success jumped from ~0.5% to above 50% after models were trained on a decade of leaked credentials. Once an attack path works, it scales instantly.
- Controls to require or rate-credit: phishing-resistant MFA or passkeys, bot management, rate limiting, impossible-travel checks, and breached-password screening (an impossible-travel check is sketched after this list).
- Underwriting signal: time-to-detect and time-to-remediate metrics, plus proof of active monitoring for automated abuse.
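To make "impossible travel" a verifiable control rather than a checkbox, here is a minimal sketch, assuming login events carry a timestamp and coarse geolocation; the 900 km/h ceiling and the LoginEvent fields are illustrative assumptions, not a production detector.

```python
# Illustrative impossible-travel check: flag a login when the distance from the
# previous login implies a travel speed no commercial flight could achieve.
# The 900 km/h ceiling and the event fields are assumptions for this sketch.
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

MAX_PLAUSIBLE_KMH = 900.0  # assumed ceiling; tune to risk appetite

@dataclass
class LoginEvent:
    user: str
    timestamp: datetime
    lat: float
    lon: float

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def is_impossible_travel(prev: LoginEvent, curr: LoginEvent) -> bool:
    """True if the implied speed between consecutive logins exceeds the ceiling."""
    distance_km = haversine_km(prev.lat, prev.lon, curr.lat, curr.lon)
    hours = (curr.timestamp - prev.timestamp).total_seconds() / 3600.0
    if hours <= 0:
        return distance_km > 1.0  # same instant, materially different places
    return distance_km / hours > MAX_PLAUSIBLE_KMH

# Example: London at 09:00, then Singapore 40 minutes later -> flagged.
a = LoginEvent("jdoe", datetime(2024, 5, 1, 9, 0), 51.5072, -0.1276)
b = LoginEvent("jdoe", datetime(2024, 5, 1, 9, 40), 1.3521, 103.8198)
print(is_impossible_travel(a, b))  # True
```

In practice this kind of rule runs on identity-provider logs and feeds an alert or step-up authentication decision, which is exactly the monitoring evidence an underwriter can ask to see.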
Variety: deepfakes and HR infiltration
Beyond executive voice clones and BEC, attackers now target HR from both sides. They impersonate employers to extract "equipment fees" from job seekers, and they pose as candidates to gain access as remote workers. Large platforms report thousands of attempts per day, and state-backed groups have used this route to infiltrate Western firms.
- Controls: identity verification in hiring (liveness checks), structured equipment procurement (no reimbursements before identity clearance), and callback procedures on any payment or onboarding change.
- Risk shift: HR moves from back-office to frontline security. Treat it like a payments function.
Why detect-and-respond is no longer enough
With AI accelerating attacks, a reactive model loses ground. Pre-emptive controls such as brand/domain monitoring, rapid takedown of lookalike sites, and strict email authentication (SPF, DKIM, DMARC) should be baseline. Think prevention first, then response.
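For pre-bind or continuous validation of that baseline, the sketch below checks whether a domain publishes SPF and DMARC records at all; it assumes the third-party dnspython package and tests presence only, not policy strength (for example, a DMARC policy of p=reject).

```python
# Illustrative email-authentication hygiene check: confirm a domain publishes
# SPF and DMARC records. Presence-only check, not a full policy evaluation.
import dns.resolver  # pip install dnspython

def txt_records(name: str) -> list[str]:
    """Return TXT record strings for a DNS name, or [] if none resolve."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [b"".join(r.strings).decode() for r in answers]

def email_auth_summary(domain: str) -> dict:
    """Report whether SPF and DMARC records are published for the domain."""
    spf = any(t.startswith("v=spf1") for t in txt_records(domain))
    dmarc = any(t.startswith("v=DMARC1") for t in txt_records(f"_dmarc.{domain}"))
    return {"domain": domain, "spf_published": spf, "dmarc_published": dmarc}

print(email_auth_summary("example.com"))
```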
The insurable risk of AI itself
As companies adopt generative AI for support, sales, and decision-making, errors are unavoidable. These systems are probabilistic: even well-governed models will produce wrong outputs, some of which cause financial harm or regulatory exposure.
That opens the door for "residual error" coverage: policies that sit alongside governance and absorb the financial impact of inevitable model mistakes. Governance frameworks like the NIST AI Risk Management Framework can anchor underwriting standards and risk selection.
Underwriting checklist for AI-using insureds
- Model inventory by use case, with business owner and risk rating.
- Human-in-the-loop thresholds and escalation paths for high-impact decisions.
- Validation: test sets, drift monitoring, red teaming, and documented error budgets (a minimal budget check is sketched after this list).
- Data lineage: training data sources, consent, IP clearance, and retention limits.
- Guardrails: content filters, safety policies, and hard blocks on restricted tasks.
- User experience: disclosures, confidence scores, and safe defaults.
- Incident playbook specific to AI errors, with reporting SLAs and rollback/kill-switch.
- Vendor risk: contractual indemnities, logs, and audit rights for AI services.
- Evidence: metrics on false positives/negatives and customer harm resolution.
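To show how the error-budget and evidence items can be turned into auditable numbers, here is a minimal sketch comparing observed false-positive and false-negative rates against the budget an insured documents; the ErrorBudget fields, thresholds, and example figures are illustrative assumptions, not a standard.

```python
# Illustrative error-budget check for one AI use case: compare observed harmful
# error rates against the documented budget attested to at binding.
from dataclasses import dataclass

@dataclass
class ErrorBudget:
    use_case: str
    max_false_positive_rate: float  # harmful wrong actions per decision
    max_false_negative_rate: float  # harmful missed actions per decision

@dataclass
class ObservedMetrics:
    decisions: int
    false_positives: int
    false_negatives: int

def within_budget(budget: ErrorBudget, observed: ObservedMetrics) -> dict:
    """Return observed rates and whether each stays inside the stated budget."""
    fp_rate = observed.false_positives / observed.decisions
    fn_rate = observed.false_negatives / observed.decisions
    return {
        "use_case": budget.use_case,
        "fp_rate": fp_rate,
        "fn_rate": fn_rate,
        "fp_in_budget": fp_rate <= budget.max_false_positive_rate,
        "fn_in_budget": fn_rate <= budget.max_false_negative_rate,
    }

budget = ErrorBudget("claims triage assistant", 0.02, 0.01)
observed = ObservedMetrics(decisions=10_000, false_positives=180, false_negatives=130)
print(within_budget(budget, observed))
# fp 1.8% sits inside the 2% budget; fn 1.3% breaches the 1% budget -> evidence for review
```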
Coverage implications to address
- What qualifies as an "AI error" vs. misuse or negligent operation.
- Treatment of data bias, defamation, or IP infringement from model outputs.
- Interaction with Tech E&O, Media/Privacy, and Crime; avoid silent AI exposure.
- Loss measurement for consequential harm (customer credits, rework, SLA penalties).
- Sublimits, aggregates, waiting periods, and event definitions tied to model version or release window.
Portfolio management and pricing levers
- Frequency updates for AI-driven phishing and automated credential attacks.
- Scenario stress tests: vendor compromise, HR infiltration, deepfake-enabled payment fraud, mass impersonation campaigns (a toy stress test is sketched after this list).
- Accumulation tracking by shared services and model providers.
- Pre-bind scanning and continuous control validation for large accounts.
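A toy sketch of how the frequency uplift and shared-vendor accumulation could be stress-tested together is below; the base frequency, uplift, severity, and vendor-share parameters are illustrative assumptions, and the model is deliberately simplistic (a per-policy Bernoulli frequency approximation with a flat severity), not a pricing tool.

```python
# Illustrative portfolio stress test: apply an AI-driven frequency uplift to an
# attritional model and add a shared-vendor shock that hits many insureds in
# the same event window. All parameters below are assumptions.
import random

random.seed(7)

POLICIES = 2_000
BASE_FREQ = 0.04          # expected attritional cyber claims per policy per year
AI_UPLIFT = 1.6           # assumed frequency multiplier from AI-enabled attacks
VENDOR_SHOCK_PROB = 0.05  # chance a shared-service compromise occurs this year
VENDOR_SHARE = 0.30       # share of the book on the compromised vendor
MEAN_SEVERITY = 120_000   # simple flat mean severity per claim

def simulate_year() -> float:
    """Return one simulated year of portfolio loss, in currency units."""
    claims = sum(
        1 for _ in range(POLICIES) if random.random() < BASE_FREQ * AI_UPLIFT
    )
    if random.random() < VENDOR_SHOCK_PROB:
        claims += int(POLICIES * VENDOR_SHARE * 0.25)  # assumed hit rate in the shock
    return claims * MEAN_SEVERITY

losses = sorted(simulate_year() for _ in range(1_000))
mean_loss = sum(losses) / len(losses)
p99 = losses[int(0.99 * len(losses))]
print(f"mean annual loss: {mean_loss:,.0f} | 99th percentile: {p99:,.0f}")
```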
Practical controls to incentivize across the book
- Authentication: phishing-resistant MFA or passkeys, SSO, least-privilege with just-in-time access.
- Email and brand protection: SPF/DKIM/DMARC, brand/domain monitoring, fast takedown.
- Automated attack defenses: bot management, rate limiting, credential screening, device fingerprinting (a breached-password screening sketch follows this list).
- Payments and approvals: dual control, voice callback on changes, out-of-band verification, strict vendor onboarding.
- Hiring security: verified identity with liveness, staged access for new hires, no equipment reimbursements before clearance.
- Training: deepfake awareness and response drills for finance, HR, and executive assistants.
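As an example of credential screening that can be verified during underwriting, the sketch below queries the public Have I Been Pwned Pwned Passwords range API using its k-anonymity scheme, so only a five-character hash prefix ever leaves the insured's environment; it assumes outbound HTTPS access to that endpoint.

```python
# Illustrative breached-password screening via the Pwned Passwords range API:
# only the first five characters of the SHA-1 hash are sent, never the password.
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-screening-sketch"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count.strip())
    return 0

# Example: reject or force rotation when a chosen password is already breached.
if breach_count("Password123!") > 0:
    print("password appears in breach data; require a different one")
```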
What insurers should do next
- Refresh cyber questionnaires to capture the three Vs and HR controls.
- Add pre-bind external scans and require proof of phishing-resistant MFA for best terms.
- Pilot residual error endorsements for insureds with mature AI governance.
- Run portfolio-wide scenarios on vendor and identity-driven events; adjust aggregates.
- Stand up a playbook for AI-related claims data, evidence, and root-cause analysis.
- Educate brokers and clients on practical, verifiable controls that earn credits.
If you need practical training for underwriting, broking, or client education on AI risk and controls, explore role-based programs at Complete AI Training.
The takeaway is simple: AI expands the attack surface and compresses time. As one expert put it, you can't afford to detect and then react; you have to prevent. Insurers that price for speed, verify controls, and build products for AI's error profile will stay ahead of loss trends while creating new value for clients.