Deepfakes enter the cyber policy: What insurance pros need to know about new coverage
Deepfake audio and video have moved from novelty to operational risk. Coalition has expanded its cyber policies to cover certain AI-driven incidents, including reputational harm, and has bundled in response services to help clients contain damage fast.
What's actually covered
Coalition says it now responds to incidents where a third party uses AI to create or manipulate video, images, or audio that falsely depicts a company's executives or staff, or misrepresents a company's products and services.
Key elements include:
- Coverage for reputational harm tied to deepfakes
- Forensic analysis to validate manipulation and trace sources
- Legal support for takedowns and removal
- Crisis communications to manage stakeholder trust and media
- Continuation of social engineering/fraudulent transfer coverage where applicable
Translation for brokers and underwriters: this isn't just loss reimbursement. It's a response package designed to shorten the half-life of a fake before it spreads.
The claims reality (for now)
Coalition reports that deepfakes remain a small slice of current claims; roughly 98% of claims don't involve advanced AI. Attackers still get plenty of wins through the usual suspects: exposed VPNs, unpatched software, and basic phishing.
When deepfakes do show up, voice cloning and text impersonation of CEOs or finance leaders are the common plays. These attacks are targeted, designed to blend into existing approval workflows, and convincing enough to bypass well-trained staff.
12- to 24-month outlook
Expect more AI in fraud and business email compromise as tools get cheaper and easier to use. Identity verification provider ID.me reports rising attempts to bypass controls with AI and synthetic media, lowering the barrier for less sophisticated criminals.
If you need broader context on social engineering and losses, the FBI's Internet Crime Complaint Center data is a useful benchmark for client conversations. See IC3's annual report.
Underwriting implications
- Controls > storytelling: Require payment controls (dual authorization, call-back to a verified number, positive pay), MFA everywhere, vendor change verification, and strict out-of-band approvals for urgent requests.
- Impersonation risk: Assess public exposure of executives (earnings calls, podcasts, social media), availability of clean voice/video samples, and media monitoring capability.
- Email domain hygiene: Confirm DMARC enforcement (p=reject), SPF, DKIM, and lookalike domain monitoring (a quick spot-check sketch follows this list).
- Access posture: Evaluate VPN hardening, patch cadence, EDR coverage, and privileged access management; deepfakes often ride the same access paths as phishing.
- Response readiness: Does the client have takedown vendors on speed dial, a PR firm with crisis chops, and legal templates for rapid platform notifications?
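Where a broker or underwriter wants to spot-check the email hygiene item above, the minimal Python sketch below queries a domain's DMARC and SPF records over DNS. It assumes the third-party dnspython package is installed, and example.com is a placeholder for the insured's domain. Treat it as an illustrative check, not an audit: DKIM records sit at per-selector names that can't be enumerated this way.

```python
# Minimal sketch: spot-check DMARC and SPF records for a domain.
# Assumes dnspython is installed (pip install dnspython).
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Return all TXT record strings for a DNS name, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

def check_domain(domain: str) -> None:
    # The DMARC policy lives in a TXT record at _dmarc.<domain>.
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]

    if not dmarc:
        print(f"{domain}: no DMARC record found")
    else:
        # Parse tag=value pairs; p=reject is the enforcement posture underwriters want.
        tags = dict(kv.strip().split("=", 1) for kv in dmarc[0].split(";") if "=" in kv)
        print(f"{domain}: DMARC policy p={tags.get('p', 'none')}")

    print(f"{domain}: SPF {'present' if spf else 'missing'}")

if __name__ == "__main__":
    check_domain("example.com")  # placeholder; swap in the insured's domain
```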
Policy wording checklist
- Definition of "deepfake/synthetic media": Clear language that includes AI-created or AI-manipulated audio, video, and images, plus text-based impersonation where relevant.
- Triggers for reputational harm: Specify what evidence qualifies (reach/engagement metrics, brand safety reports, SEO impact, revenue correlation).
- First- vs. third-party: Spell out sublimits for crisis comms, forensics, takedowns, and business interruption (if applicable). Clarify overlap with Media Liability and D&O.
- Social engineering: Verify fraudulent transfer coverage conditions (verification steps, call-back protocols) and any carve-backs for deepfake-enabled impersonation.
- Exclusions and carve-backs: Check war/hostile acts language, prior knowledge, sanctions. Look for carve-backs where deepfakes are used in criminal fraud.
- Panel requirements: Approved IR, legal, and PR vendors for fast engagement; pre-approval processes for urgent takedowns.
Broker playbook: fast client questions
- How do you verify executive or payment requests made over voice/video? Is there a hard rule for out-of-band confirmation?
- Who owns takedowns when a fake goes viral on social platforms? Do you have platform contacts and legal templates ready?
- What training prepares staff for voice clones and AI-based chat impersonation, not just phishing emails?
- Do you monitor for lookalike domains, spoofed social profiles, and unauthorized brand ads? (A minimal detection sketch follows this list.)
- What's your maximum single payment exposure without additional approvals? Can it be lowered?
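On the lookalike-domain question, a client doesn't need a commercial service to get a first read. The standard-library Python sketch below generates a few common typosquat variants of a brand domain and flags any that resolve in DNS. The substitution rules and the example.com placeholder are illustrative assumptions; real monitoring covers far more permutations, plus homoglyphs and new-TLD registrations.

```python
# Minimal sketch: flag registered lookalikes of a brand domain.
# Standard library only; the variant rules are illustrative, not exhaustive.
import socket

SUBSTITUTIONS = {"o": "0", "l": "1", "i": "1", "e": "3", "a": "4"}

def variants(domain: str) -> set[str]:
    """Generate simple typosquat candidates for a name.tld domain."""
    name, _, tld = domain.partition(".")
    out: set[str] = set()
    # Single-character look-alike substitutions (o->0, l->1, ...).
    for i, ch in enumerate(name):
        if ch in SUBSTITUTIONS:
            out.add(name[:i] + SUBSTITUTIONS[ch] + name[i + 1:] + "." + tld)
    # Single-character omissions and doubled letters.
    for i in range(len(name)):
        out.add(name[:i] + name[i + 1:] + "." + tld)
        out.add(name[:i] + name[i] + name[i:] + "." + tld)
    out.discard(domain)
    return out

def resolves(domain: str) -> bool:
    """Treat any DNS resolution as a sign the name is live or parked."""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    brand = "example.com"  # placeholder brand domain
    live = sorted(v for v in variants(brand) if resolves(v))
    print(f"{len(live)} lookalike(s) resolving:", live)
```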
Claims prep: what to do in the first hour
- Freeze the flow: Halt pending payments; lock related accounts; notify banks and payment processors.
- Preserve evidence: Capture URLs, platform IDs, hashes, call recordings, emails, and logs. Don't alter source files (a hashing sketch follows this list).
- Engage the panel: Contact incident response, legal, and PR per policy instructions. Start takedowns and platform abuse reports immediately.
- Control the message: Issue a brief, factual notice to employees, customers, and key partners; provide a verified channel for updates.
- Notify the carrier: Early notice speeds approvals for vendors and spend.
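To make the evidence-preservation step concrete, here is a minimal Python sketch that hashes copies of collected files and writes a timestamped manifest, which helps establish integrity for carriers, platforms, and law enforcement. The ./evidence folder name is a placeholder; the point is the pattern (hash early, record metadata, never touch originals), not a full forensic toolchain.

```python
# Minimal sketch: hash evidence copies and write a timestamped manifest.
# Run it against copies only; never alter the original source files.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large recordings don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(evidence_dir: str, out_file: str = "manifest.json") -> None:
    entries = []
    for p in sorted(Path(evidence_dir).rglob("*")):
        if p.is_file():
            entries.append({
                "file": str(p),
                "sha256": sha256_of(p),
                "size_bytes": p.stat().st_size,
                "recorded_at": datetime.now(timezone.utc).isoformat(),
            })
    Path(out_file).write_text(json.dumps(entries, indent=2))
    print(f"Wrote {len(entries)} entries to {out_file}")

if __name__ == "__main__":
    build_manifest("./evidence")  # placeholder folder of collected copies
```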
Pricing and portfolio view
- Frequency is low, severity can spike: Most losses still stem from basic controls failures; deepfakes add speed and credibility to fraud when they hit.
- Cap volatility: Use sublimits for reputational harm and crisis comms, with buy-up options for executive-heavy or consumer-facing brands.
- Controls-based credits: Reward clients with enforced DMARC, executive media policies, strict payment verification, and pre-contracted takedown/PR support.
Bottom line for insurance pros
Deepfake coverage is becoming standard in cyber, but it's only as good as the client's verification habits and response muscle. Push the basics, add media monitoring and takedown capability, and make sure policy language keeps pace with how fakes spread.
If your team needs practical upskilling to evaluate AI risks and controls, explore role-based learning paths here: AI courses by job.