AI voice cloning is surging. Liability risks are, too
AI-driven voice cloning is moving into marketing, customer service, and media at a pace many teams didn't plan for. The global voice synthesis market sits around $3.3 billion, but legal frameworks haven't caught up and loss history is thin. Insurers are flagging the gap between adoption and controls.
"The main concern is that this is a new, emerging risk," said Erlisa King, director of product liability at Tokio Marine HCC. "What we're finding is that our insureds aren't necessarily prepared, so they may not have the quality controls in place to mitigate this exposure."
Where the exposure comes from
Voice cloning builds a digital replica of a person's speech from limited audio, mimicking pitch, tone, and cadence. It powers voiceovers, localized content, customer service avatars, and accessibility tools, including speech restoration.
The risk profile cuts across intellectual property, privacy, right of publicity, and fraud. In the US, rules are uneven. California's SB 53, the Transparency in Frontier Artificial Intelligence Act, addresses accountability in broad strokes but leaves voice cloning and biometric voice replication in a gray area. That uncertainty creates room for disputes and coverage friction.
Usage is spreading across websites, training modules, customer service bots, voicemail campaigns, and digital avatars. "As that expands across platforms, we're going to see more violations: contract violations, potential fraud and related issues," King said.
Early lawsuits show the likely attack paths
Several early cases show where disputes are likely to land. In one complaint involving Lovo, Inc., two voice actors allege their voices were cloned and used beyond agreed terms for commercial work on the platform. The core issue: consent scope and unauthorized use.
ElevenLabs, Inc. faces separate suits from voice actors claiming unauthorized use and misappropriation of likeness and publicity rights. Expect more filings as the tech spreads and regulators start to react. Insurers are watching for patterns that will shape wording and underwriting.
Where coverage is responding today
Claims remain modest. To date, most incidents have triggered "advertising injury" or "personal injury" provisions within standard liability forms. Still, brokers are pushing clients to look more broadly, given the mix of IP, privacy, and reputational exposure. Lines worth reviewing include:
- Media liability for content and publicity rights disputes
- Cyber liability for data, access, and incident response
- Technology E&O for performance or misuse tied to services
- Privacy liability for biometric data and consent issues
- Crisis management/reputational harm for public fallout
The reputational stakes can be high and move quickly. "Once these claims start to hit, having crisis management coverage becomes important," King said.
Expect sublimits, exclusions, and tighter wording
With limited loss data, carriers are leaning on underwriting discipline and controls. As claims volume grows and legal theories mature, expect policy sublimits or exclusions that call out AI and biometric risks.
"I do see (coverage) expanding in the future," King said. "Where we currently have a cyber sublimit, we may eventually see specific wording addressing AI cloning, imaging and biometrics." Companies using voice cloning as a secondary function (e.g., call centers, marketing) may see sublimits. AI-centric firms may be steered to standalone cyber or tech policies.
What underwriters are asking
- What are your voice cloning use cases and deployment channels?
- Do you obtain clear, written consent for each voice used, with defined scope and revocation rights?
- How is biometric voice data stored, encrypted, segregated, and ultimately deleted?
- What safeguards prevent misuse or unauthorized access (access control, logging, approval gates)?
- Are vendors vetted for watermarking, provenance tracking, and abuse monitoring?
- Do you have takedown protocols and incident response plans (including PR and legal)?
- Are there heightened controls for minors and sensitive use cases?
Risk controls that actually move the needle
- Use vetted third-party providers with contracts that require watermarking, traceability, and abuse detection. Avoid building your own stack unless you can meet those bars.
- Consent management: obtain written consent with clear scope (channels, geography, duration, derivative uses), payment terms, and revocation. Track it like a license; a minimal sketch follows this list.
- Dataset provenance: require vendors to document lawful sourcing of voice data and to bar scraping from restricted platforms.
- Access and change control: role-based access, MFA, encryption at rest/in transit, approval workflows for new voices and scripts.
- Human-in-the-loop: review scripts for legal/brand risk before publication; verify high-risk outputs manually.
- Watermarking and detection: embed identifiers in audio and use detectors to find unauthorized copies.
- Monitoring and takedowns: scan platforms for misuse; pre-draft DMCA and platform-specific takedown notices.
- Contract discipline: standard templates for voice actors and vendors; audit rights; indemnity; liability caps aligned to exposure.
- Special handling for minors: parental consent, extra verification, and hard blocks on sensitive categories.
- Tabletop exercises: simulate a deepfake fraud call or unauthorized campaign; practice legal, PR, and customer comms.
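To make the consent-management point concrete, here is a minimal sketch of a consent register that treats each recorded voice like a licensed asset. The `ConsentRecord` class and its field names are illustrative assumptions rather than any specific rights-management product; a real system would sit on top of signed contracts and legal review.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ConsentRecord:
    """One voice actor's written consent, tracked like a license (illustrative sketch)."""
    voice_id: str
    actor_name: str
    channels: set[str]              # e.g. {"web", "ivr", "training"}
    regions: set[str]               # e.g. {"US", "EU"}
    expires: date                   # consent duration from the contract
    derivative_uses_allowed: bool
    revoked: bool = False

    def permits(self, channel: str, region: str, on: date, derivative: bool = False) -> bool:
        """Return True only if the proposed use falls inside the documented scope."""
        if self.revoked or on > self.expires:
            return False
        if derivative and not self.derivative_uses_allowed:
            return False
        return channel in self.channels and region in self.regions


# Usage: block generation jobs that fall outside the signed scope.
consent = ConsentRecord(
    voice_id="v-0042",
    actor_name="Example Actor",
    channels={"web", "ivr"},
    regions={"US"},
    expires=date(2026, 12, 31),
    derivative_uses_allowed=False,
)

assert consent.permits("ivr", "US", date(2026, 1, 15))
assert not consent.permits("broadcast", "US", date(2026, 1, 15))  # channel outside scope
```

The point of the sketch is the gate, not the data model: every generation or publication job should be forced through a scope check like `permits()` before audio is produced, so a revoked or expired consent stops the pipeline rather than surfacing later as a claim.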
Policy and contract language worth considering
- Explicit consent scope for cloning, editing, and redistribution; no secondary use without renewed consent.
- Prohibited categories (e.g., political, medical, or financial impersonation) and high-risk scenarios.
- Vendor warranties on lawful data sourcing, no scraping from barred sites, and no hidden training reuse.
- Watermarking, logging, and model version tracking requirements (a minimal logging sketch follows this list).
- Notice and takedown SLAs; cooperation duties during claims or investigations.
- IP and publicity rights ownership and licensing spelled out; clear termination and deletion obligations.
- Indemnification and liability caps that reflect potential media and privacy exposure.
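As a small illustration of the logging and model-version-tracking requirement above, the sketch below appends one provenance record per generated clip. The `log_generation` helper, its field names, and the JSONL log file are hypothetical, and hashing the output is not a substitute for true audio watermarking, which needs dedicated tooling; it simply shows the kind of audit trail a contract clause could require.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_generation(voice_id: str, consent_id: str, model_version: str,
                   script_text: str, approver: str, audio_bytes: bytes,
                   log_path: str = "generation_log.jsonl") -> dict:
    """Append one provenance record per generated clip (illustrative sketch, not a watermark)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "voice_id": voice_id,
        "consent_id": consent_id,          # ties the clip back to the signed consent record
        "model_version": model_version,    # which cloning model produced the audio
        "script_sha256": hashlib.sha256(script_text.encode()).hexdigest(),
        "audio_sha256": hashlib.sha256(audio_bytes).hexdigest(),
        "approver": approver,              # human-in-the-loop sign-off
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

A record like this also supports takedown and claims cooperation duties: when a clip surfaces somewhere it shouldn't, the hashes and consent ID make it possible to show what was generated, under which consent, by which model version, and who approved it.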
Common claim triggers to rehearse
- Consent breach: a voice is used outside the contract's scope, then syndicated across channels.
- Impersonation fraud: a cloned executive voice authorizes a wire; customers or suppliers rely and suffer loss.
- Misleading ads: an AI voice makes claims that spark false advertising or consumer protection complaints.
- Data incident: stored voiceprints are accessed and sold, leading to privacy suits and regulatory scrutiny.
- Platform misuse: a partner's tool allows unauthorized cloning of minors or celebrities, dragging your brand in.
Broker checklist to reduce surprises
- Map client use cases and data flows; flag high-risk touchpoints.
- Run a coverage gap analysis across GL, media, cyber, tech E&O, and privacy forms.
- Scrub exclusions: IP, biometric, contract, knowledge of falsity, and intentional acts.
- Right-size limits and sublimits; consider crisis management and reputational harm.
- Pre-negotiate panel vendors (PR, takedown, forensics, privacy counsel).
- Coach clients on consent, contracts, watermarking, monitoring, and minors.
The bottom line
Voice cloning sits at the intersection of IP, privacy, publicity rights, and fraud. The law is still forming, but expectations are clear: consent, control, and accountability. As King summed it up, "We all have a role to play. The goal is to keep these exposures under control and, ideally, prevent incidents from becoming claims at all."