Morgan Freeman cracks down on AI clones of his voice

Morgan Freeman is cracking down on AI voice clones, calling unauthorized mimicry theft. Legal teams should move fast: assert publicity rights, seek takedowns and injunctions.

Published on: Nov 16, 2025

Morgan Freeman targets unauthorized AI voice clones - what counsel should act on now

Morgan Freeman is moving against a wave of AI tools cloning his voice without consent or compensation. "Don't mimic me with falseness," he said in a recent interview. "If you're gonna do it without me, you're robbing me."

His legal team has already identified numerous unauthorized uses and is pursuing action against them. The dispute lands amid wider industry friction, including controversy around the AI "actress" Tilly Norwood, created by Dutch actress Eline Van der Velden's studio Particle 6 - a move SAG-AFTRA has condemned as a threat to working actors.

Why this matters for legal teams

Voice is valuable, protectable, and uniquely identifiable. Cloning it without permission triggers multiple causes of action across state and federal law - and the exposure compounds quickly across platforms, vendors, and advertisers.

Likely legal theories on the table

  • Right of publicity (state law): Unauthorized commercial use of name, likeness, and voice. Strong in states like California and Tennessee (see the ELVIS Act), variable elsewhere.
  • False endorsement / confusion (Lanham Act § 43(a)): Implying approval or association by mimicking a distinctive voice.
  • Copyright-related claims: Use of copyrighted sound recordings to train or generate outputs; potential DMCA claims (including removal of CMI) tied to source material.
  • Unfair competition and consumer protection: Deceptive or misleading AI ads or content.
  • Contract / ToS violations: Breaches tied to platform terms, data licensing, or prior talent agreements.
  • State impersonation/deepfake statutes: Growing patchwork regulating deceptive synthetic media, including voice.

Immediate enforcement playbook

  • Preservation and demand letters: Lock down training data, prompts, model versions, and customer lists.
  • Platform takedowns: Issue DMCA and policy-based notices to hosting sites, app stores, and ad networks; a batch-notice sketch follows this list.
  • Injunctions: Move fast for TRO/preliminary injunction when misattribution or reputational harm is ongoing.
  • Target the chain: Go after the model maker, distributors, and commercial users running the impersonation in ads, podcasts, or apps.
  • Forum strategy: Pick states with strong publicity rights, favorable precedent, and speed to relief.
  • Damages theory: Blend statutory, actual, corrective advertising, and disgorgement; include fee-shifting where available.
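
To make the platform-takedown step concrete, here is a minimal sketch of batching notices from a monitoring hit list. The platforms, URLs, and notice wording are hypothetical placeholders; any real notice must track each platform's own requirements and the governing statute.

```python
# A minimal sketch of batching platform takedown notices from a URL list.
# All platforms, URLs, and template fields here are hypothetical placeholders;
# adapt the wording to each platform's actual notice requirements.
from datetime import date
from pathlib import Path
from string import Template

NOTICE = Template("""\
To: $platform abuse/copyright team
Date: $date

We represent the rights holder identified below. The material at the URL
below uses an AI-generated clone of our client's voice without consent,
in violation of the client's right of publicity and your platform policies.

Rights holder: $rights_holder
Infringing URL: $url

We request expedited removal and preservation of associated account data.
""")

# Hypothetical hit list produced by monitoring; replace with real findings.
HITS = [
    {"platform": "ExampleTube", "url": "https://exampletube.test/watch?v=abc123"},
    {"platform": "PodHost", "url": "https://podhost.test/ep/999"},
]

def write_notices(hits, rights_holder, out_dir="notices"):
    # Write one notice file per infringing URL for counsel review before sending.
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for i, hit in enumerate(hits, start=1):
        text = NOTICE.substitute(
            platform=hit["platform"],
            url=hit["url"],
            rights_holder=rights_holder,
            date=date.today().isoformat(),
        )
        (out / f"notice_{i:03d}.txt").write_text(text)

if __name__ == "__main__":
    write_notices(HITS, rights_holder="Client name")
```

Generating every notice from one template keeps the factual assertions consistent across platforms and leaves a file trail that can later serve as litigation exhibits.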

Contract clauses to lock in now

  • AI consent and compensation: No training, synthesis, or cloning of voice without explicit, revocable consent and separate compensation.
  • Use restrictions: Ban transfer, sublicensing, and derivative voice models; prohibit simulated "new reads."
  • Audit and provenance: Access to training data, dataset logs, and downstream customer disclosures.
  • Watermarking and detection: Require detectable signals in outputs and cooperation on takedowns; a fingerprint-registry sketch follows this list.
  • Indemnity and insurance: Vendor indemnity for AI misuse; verify AI/media liability coverage.
  • Union alignment: Ensure compliance with SAG-AFTRA AI provisions in applicable contracts.
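
Where a contract requires watermarking and detection, the vendor also needs an auditable record of what it actually delivered. Below is a minimal sketch of an output-fingerprint registry using exact SHA-256 matching. The file paths are hypothetical, and a byte-level hash only catches verbatim copies; real deployments would layer robust audio watermarks and perceptual hashing on top.

```python
# A minimal sketch of an authorized-output fingerprint registry, assuming
# exact-copy matching via SHA-256. Vendor-specific watermark detectors for
# re-encoded audio are not shown here.
import hashlib
import json
from pathlib import Path

REGISTRY = Path("authorized_outputs.json")  # hypothetical registry location

def fingerprint(path: str) -> str:
    """Hash the raw bytes of an audio file."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def register(path: str) -> None:
    """Record an authorized synthetic output at delivery time."""
    db = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else {}
    db[fingerprint(path)] = path
    REGISTRY.write_text(json.dumps(db, indent=2))

def is_authorized(path: str) -> bool:
    """Check whether a suspect file matches a registered delivery."""
    db = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else {}
    return fingerprint(path) in db
```

On a takedown, counsel can then show whether a circulating file matches an authorized delivery or is an unapproved clone.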

Risk controls for studios, agencies, and AI vendors

  • Data provenance: Document source, license, and scope for every audio file; ban scraping of celebrity voices.
  • Consent verification: Signed releases for any training or cloning; maintain auditable records.
  • Model safeguards: Blocklist signature voices (see the similarity-check sketch after this list); prompt and output filters; incident response plans.
  • Human review: Pre-release review for ads and sponsored content using synthesized voices.
  • Advertising controls: Prevent misleading endorsements; add clear AI disclosures where appropriate.
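
As an illustration of the blocklist safeguard above, the sketch below screens an incoming cloning request by comparing its speaker embedding against protected voices with cosine similarity. The embeddings are random placeholders standing in for output from a speaker-verification encoder, and the threshold is a hypothetical value that would need tuning on labeled data.

```python
# A minimal sketch of a "signature voice" blocklist, assuming speaker
# embeddings from some encoder model. The vectors below are random
# placeholders; the threshold is hypothetical and must be tuned.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 256
SIMILARITY_THRESHOLD = 0.80  # tune on labeled target/impostor pairs

# Hypothetical protected-voice embeddings, one per blocklisted identity.
BLOCKLIST = {
    "protected_voice_1": rng.standard_normal(EMBED_DIM),
    "protected_voice_2": rng.standard_normal(EMBED_DIM),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def blocked(request_embedding: np.ndarray) -> str | None:
    """Return the matched identity if the requested voice is too close
    to a protected voice; None means the request may proceed."""
    for name, ref in BLOCKLIST.items():
        if cosine(request_embedding, ref) >= SIMILARITY_THRESHOLD:
            return name
    return None

# Example: screen an incoming cloning request before synthesis.
incoming = rng.standard_normal(EMBED_DIM)
match = blocked(incoming)
if match:
    print("request rejected: resembles", match)
else:
    print("request allowed")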

Union and policy backdrop

SAG-AFTRA has drawn a hard line on AI uses that replace or impersonate working performers. Its published positions outline consent, compensation, and control over digital replicas.

States are moving too. Tennessee's ELVIS Act explicitly protects voice as part of a person's likeness, signaling tougher consequences for unauthorized cloning. Federal proposals, including the NO FAKES Act discussion draft, point to more activity ahead.

Key takeaways for counsel

  • Map the actors: model developer, dataset provider, distributor, brand, and platform - then pressure each link.
  • Lead with publicity rights and false endorsement; layer in copyright and contract where the facts support it.
  • Move early for injunctive relief; the harm compounds with every stream, impression, and repost.
  • Update boilerplate: AI clauses aren't optional anymore - consent, compensation, controls, and audit rights should be standard.

Freeman's stance is clear: his voice is his asset. If your client's voice is being cloned, treat it like any other high-value IP - defend it quickly and thoroughly.

If your legal team needs a fast primer on AI systems and risks to better pressure vendors and platforms, see these curated resources organized by job role: Complete AI Training.

