OpenAI's Sora turns the dead into deepfakes, sparking a legal and ethical backlash

Sora 2 lets users create short videos of deceased public figures, spurring viral use and protests. Lawyers face unsettled Section 230 questions, state right-of-publicity exposure, and monetization risks.

Categorized in: AI News, Legal
Published on: Oct 18, 2025

Sora 2's deepfakes of the dead: what lawyers need to know now

OpenAI's Sora 2 makes 10-second, high-quality videos from text prompts and, crucially, lets users depict famous deceased figures. That design choice has driven massive engagement and equally loud backlash. Families of public figures are protesting offensive, viral clips. OpenAI has paused depictions of Martin Luther King Jr. while it "strengthens guardrails," and says representatives of "recently deceased" public figures can request blocks.

For legal teams, the core questions are liability, publicity rights, and product design choices that may count as encouragement. Below is a concise brief on risk, plausible claims, and practical steps for clients building, using, or moderating generative video.

Section 230: unsettled shield for AI video platforms

Whether Section 230 covers the outputs of generative AI remains unresolved. If it does, platforms may avoid liability for user-generated videos; if it does not, plaintiffs will test product-design, moderation, and promotion theories. Expect uncertainty until appellate courts, or the Supreme Court, squarely address AI outputs.

Reference: 47 U.S.C. § 230 (Cornell LII).

Libel, deception, and the right of publicity

U.S. libel law protects the living, not the dead; a defamation claim generally dies with the person. That is why Sora 2 requires consent only for living persons. The dead, however, can still be protected by postmortem right-of-publicity statutes, which vary by state and typically focus on commercial use and false endorsement.

Key states include California (statutory postmortem rights) and New York (postmortem rights added in 2021). Commercialization, ad placements, or monetized content using a deceased person's likeness are higher-risk than noncommercial, clearly labeled entertainment. Watermarks help on deception but do not eliminate publicity claims.

Reference: Cal. Civ. Code § 3344.1 (postmortem rights).

Where plaintiffs may press

  • Inducement and design: A homepage or feed dominated by "historical figure" clips can be framed as encouraging the practice.
  • Training and output targeting: If a model is tuned to reproduce specific deceased personas, plaintiffs may argue foreseeability and design intent.
  • False endorsement and unfair competition: Monetized channels using deceased celebrities to drive sales or sponsorships invite claims.
  • Consumer protection: Misleading ads or inadequate labeling raise state AG and FTC risk.
  • Emotional distress and dignitary harms: Estates may test novel theories where clips are especially degrading or exploit tragic events.

User risk vs platform risk

Platforms will argue entertainment use, visible watermarks, and content policies reduce their exposure. Users who build audiences on deceased-celebrity content, then monetize, face a more direct path to liability under state publicity laws and false endorsement theories. The watermark does not cure commercial exploitation.

Policy drift and "Whac-A-Mole" guardrails

OpenAI has moved from permissive defaults to selective pauses (e.g., Martin Luther King Jr.) and "recently deceased" opt-outs. It also shifted to opt-in for certain copyrighted properties after infringement complaints. Expect continued iteration until courts or legislation clarify the boundaries. In the interim, product and policy choices will be scrutinized as evidence of intent and control.

Action items for in-house and outside counsel

  • Update TOS and product policies: Require consent for living persons, restrict depictions of deceased public figures absent estate approval for any commercial or sponsorship context, and mandate clear labeling.
  • Implement proactive filters: Blocklists for high-risk names; friction for prompts involving tragic events; rate limits and repeat-offender penalties.
  • Tune distribution: Demote shock content; require warnings for sensitive historical subjects; preserve audit logs for enforcement and litigation.
  • Monetization controls: Disable tipping, ads, and affiliate links on persona-based content without documented rights. Add revenue-sharing or rights-clearance workflows where feasible.
  • Notice-and-action: Stand up a fast channel for estates to request removals or blocks; verify authority; publish transparency reports.
  • Labeling: Keep visible, durable watermarks and on-platform context panels that identify synthetic media, source, and policy status.
  • Jurisdiction mapping: Track postmortem rights in NY, CA, TN, and other states; geofence high-risk content when necessary.
  • Insurance and reserves: Review media liability coverage; model exposure where creators can monetize persona content.
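The filtering and monetization controls above can be sketched as a simple moderation gate. This is a minimal illustration, not OpenAI's actual implementation: the blocklist entries, topic list, and `Decision` fields are all hypothetical assumptions standing in for counsel-reviewed policy data.

```python
from dataclasses import dataclass, field

# Hypothetical policy data; a real deployment would load counsel-reviewed,
# regularly updated lists rather than hard-coded constants.
BLOCKED_PERSONAS = {"martin luther king jr"}      # hard block absent estate approval
SENSITIVE_TOPICS = {"assassination", "funeral"}   # add friction, not a hard block

@dataclass
class Decision:
    allow: bool
    monetizable: bool
    reasons: list = field(default_factory=list)

def screen_prompt(prompt: str, has_estate_approval: bool = False) -> Decision:
    """Gate a generation request: block listed deceased personas,
    flag sensitive-event prompts for added friction, and disable
    monetization unless rights are documented."""
    text = prompt.lower()
    decision = Decision(allow=True, monetizable=has_estate_approval)
    for name in BLOCKED_PERSONAS:
        if name in text and not has_estate_approval:
            decision.allow = False
            decision.reasons.append(f"blocked persona: {name}")
    for topic in SENSITIVE_TOPICS:
        if topic in text:
            decision.reasons.append(f"sensitive topic, add friction: {topic}")
    return decision
```

In practice each denial or friction event would also be written to the audit logs recommended above, since those records double as evidence of good-faith enforcement.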

Guidance for creators and brands

  • Do not monetize persona-based content without estate permission. This includes ads, paid sponsorships, paywalled content, affiliate links, and cross-posting to monetized platforms.
  • Avoid depictions that could imply endorsement, especially for products and political topics. Disclaimers help but are not decisive.
  • Keep receipts: retain prompts, outputs, timestamps, and distribution decisions to evidence good-faith use and labeling.
  • If commissioned by a client, treat estate permission like any other IP clearance. Push for written approvals and indemnities.
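The "keep receipts" advice above can be as simple as an append-only log of every generation and labeling decision. A minimal sketch follows; the field names, helper, and file format are illustrative assumptions, not a prescribed standard.

```python
import datetime
import hashlib
import json

def log_generation(prompt: str, output_id: str, labels: list[str],
                   log_path: str = "audit_log.jsonl") -> dict:
    """Append a timestamped record of a generation to a JSON Lines file,
    hashing the prompt so its content can be verified later without
    storing sensitive text in the log itself."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_id": output_id,
        "labels": labels,  # e.g. ["synthetic-media", "noncommercial"]
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Retaining the raw prompts and outputs alongside such a log (under access controls) gives counsel contemporaneous evidence of labeling and distribution choices if a claim arrives later.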

Signals to watch

  • Appellate rulings on whether Section 230 applies to AI-generated content.
  • State-level updates to postmortem publicity statutes and deepfake laws, especially around political speech and ads.
  • Platform policy shifts from opt-out to opt-in for deceased personas; standardized estate registries or rights exchanges.
  • FTC actions on deceptive endorsements using synthetic media.

Upshot

Until courts weigh in, the safest path is conservative: limit distribution, block monetization, and require permissions when likeness drives value. Product choices will be read as intent. Strong records, clear labeling, and responsive takedown processes are your best defense while the law catches up.


