Character.AI Pulls Disney Chatbots After Cease-and-Desist Amid Growing Studio-AI Friction

Character.AI pulled Disney bots after a cease-and-desist over copyright, trademark, and safety risks. Removals continue as it eyes licensing and stricter safeguards.

Published on: Oct 02, 2025

Character.AI pulls Disney characters after cease-and-desist: legal risks and next steps for platforms

On Oct. 1, 2025, Character.AI began removing Disney characters from its chatbot platform after receiving a cease-and-desist letter alleging copyright and trademark infringement. The letter cited impersonations of Elsa, Moana, Peter Parker, and Darth Vader that replicated each character's "essence" and backstory, creating the impression of Disney's endorsement.

Character.AI said the characters were user-generated and that removals are underway. The company also signaled an interest in licensing, stating a goal of giving rights holders tools to build official, monetizable experiences.

What Disney is asserting

Copyright. Fictional characters can be protected when they are sufficiently distinctive. Chatbots recreating protected characters and their narrative traits can be treated as derivative works, even without identical visuals.

Trademark. Impersonations that suggest sponsorship or endorsement raise likelihood-of-confusion and potential dilution issues for famous marks. References to "essence, goodwill, and look and feel" point to brand indicia beyond names and logos. Disclaimers are weak if the experience reads as an official portrayal.

Safety concerns. Disney flagged reports of inappropriate conversations, heightening risk of tarnishment claims and consumer-protection scrutiny, especially where minors may interact.

Why "user-generated" is not a shield by itself

DMCA §512 can provide safe harbor for copyright if the platform maintains a designated agent, posts and enforces a repeat-infringer policy, and acts expeditiously on proper notices. Evidence of "willful blindness" or inadequate takedown processes can jeopardize that protection. See the U.S. Copyright Office's overview of safe harbors for practical requirements: copyright.gov/section512.

Trademark claims are different. There is no DMCA-style safe harbor for trademarks. Contributory liability can attach where the platform knows of specific infringement and continues to supply services that enable it. After Jack Daniel's v. VIP Products (2023), use of famous marks as source identifiers receives less First Amendment leeway; "parody" labels and disclaimers alone won't cure likely confusion. See the opinion: supremecourt.gov.

Right of publicity. If chatbots mimic a performer's distinctive voice or persona (e.g., a "soundalike"), state law claims (such as in California) are possible, independent of copyright.

Context: litigation is intensifying

In June, Disney and Universal sued Midjourney over alleged infringement tied to well-known franchises, and Warner Bros. Discovery later joined with claims involving Scooby-Doo and Superman. The Character.AI dispute fits the same pattern: platforms enabling character impersonation face parallel copyright, trademark, and publicity pressures.

Action checklist for in-house counsel at AI platforms

  • Block impersonation at the source. Prohibit "play as" experiences for famous characters and celebrities. Train models and guardrails to reject prompts like "Answer as [Character]" or "Speak in [Character]'s voice."
  • Curate names and identifiers. Maintain denylists for protected names, aliases, and distinct traits; include phonetic and typographical variants. Log and review attempted circumventions.
  • DMCA compliance, documented. Appoint and publish a registered agent, codify a repeat-infringer policy, and keep auditable records of notice, takedown, and termination decisions.
  • Trademark notice-and-action. Build a process for brand-owner reports with expedited removal, even absent a statutory safe harbor. Track recidivist users and apply escalating enforcement.
  • Labeling with enforcement, not just text. Disclaimers help, but they are not a defense to confusion. Pair labels with functional constraints that prevent endorsement signals (names, avatars, catchphrases, and signature voices).
  • Safety and youth protections. Age-gate sensitive content; restrict sexual or violent dialog tied to children's properties. Document how safety systems reduce tarnishment risk.
  • Rights-owner partnerships. Offer verified "official" experiences with pre-approval controls, creative guardrails, and revenue sharing. Distinguish these with clear badges and stricter moderation.
  • Creator terms with teeth. Ban impersonation, require warranties on rights, and enable fast suspension on credible reports. Provide an appeal path with audit logs.
  • Voice and likeness filters. Detect and block soundalike synthesis tied to identifiable performers. Use similarity thresholds with human review for edge cases.
  • Train staff. Give Trust & Safety and developer teams clear playbooks for copyright, trademark, and publicity issues, including triage timelines and escalation criteria.
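Several items on the checklist above (blocking "play as" prompts, denylisting protected names with typographic variants) can be combined into a simple input filter. The sketch below is a minimal, hypothetical Python illustration, not a production system: the character names, leetspeak map, and regex patterns are assumptions chosen for the example, and a real platform would pair this with model-level guardrails and human review.

```python
import re
import unicodedata

# Hypothetical denylist; a real system would load a much larger,
# rights-holder-maintained set of names, aliases, and distinctive traits.
DENYLIST = {"elsa", "moana", "peter parker", "darth vader"}

# Common typographic substitutions used to dodge naive string matching.
LEET_MAP = str.maketrans(
    {"0": "o", "1": "l", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s"}
)

# Illustrative impersonation requests: "answer as X", "in X's voice", "pretend to be X".
IMPERSONATION_PATTERNS = [
    re.compile(r"\b(answer|speak|reply|respond|talk)\s+as\b", re.IGNORECASE),
    re.compile(r"\bin\s+[\w\s']+?'s\s+voice\b", re.IGNORECASE),
    re.compile(r"\b(pretend|roleplay|act)\s+(to be|as)\b", re.IGNORECASE),
]


def normalize(text: str) -> str:
    """Casefold, strip accents, and undo simple leetspeak substitutions."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return text.casefold().translate(LEET_MAP)


def should_block(prompt: str) -> bool:
    """Return True when a denylisted name appears alongside an impersonation request."""
    norm = normalize(prompt)
    names_hit = any(name in norm for name in DENYLIST)
    pattern_hit = any(p.search(norm) for p in IMPERSONATION_PATTERNS)
    return names_hit and pattern_hit
```

Requiring both a name match and an impersonation pattern keeps false positives down (merely mentioning a character is allowed), which mirrors the checklist's distinction between discussing a character and offering a "play as" experience. Attempted circumventions that trip only one signal could still be logged for the review queue.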

Considerations for rights holders

  • Bundle claims. Combine copyright, trademark, and publicity theories in demand letters. Include examples showing confusion, explicit impersonation, and any unsafe dialog.
  • Preserve evidence. Capture timestamps, prompts, responses, and UI elements implying endorsement. Request logs where available.
  • Set terms for licensing. If allowing official experiences, require content controls, age gating, revenue terms, and audit rights.

What to watch next

Expect more disputes over character mimicry, "look and feel," and soundalike voices. Courts will continue to test the limits of expressive-use defenses against clear signals of sponsorship, especially for famous marks and youth-focused IP.

If your legal team is building AI governance and needs structured upskilling, explore curated training by job role: Complete AI Training - Courses by Job.