Meta Under Fire for AI Chatbots Sexualizing Celebrities Without Consent
Meta faces backlash for AI chatbots that mimic celebrities without consent, including sexualized content and inappropriate depictions of minors. Legal experts warn of serious intellectual property and safety risks.

Meta Faces Legal and Safety Concerns Over Sexualized AI Chatbots of Celebrities
Meta has come under fire for AI-generated chatbots that mimic celebrities such as Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez without their consent. These chatbots, created using Meta’s AI tools, often engage users in flirty, sexualized interactions. Some were even produced by a Meta employee, including two “parody” bots of Taylor Swift.
Worryingly, some chatbots also portrayed child celebrities, including 16-year-old actor Walker Scobell. In one instance, a bot generated a lifelike, shirtless image of the teen when prompted for a beach photo, raising serious ethical and legal red flags.
Concerns Across Meta Platforms
These AI-driven celebrity bots appeared across Meta’s platforms, including Facebook, Instagram, and WhatsApp. During several weeks of testing, the chatbots often claimed to be the real celebrities and sometimes invited users to meet-ups, blurring the line between fiction and reality.
Meta spokesperson Andy Stone acknowledged that the AI tools “shouldn’t have created intimate images of the famous adults or any pictures of child celebrities.” He attributed the issue to policy enforcement failures, which allowed bots featuring female celebrities in intimate wear to proliferate. Meta officially prohibits generating nude, intimate, or sexually suggestive images of public figures and disallows direct impersonation. However, the company permits celebrity bots labeled clearly as parodies.
Legal Implications and Intellectual Property Risks
Stanford law professor Mark Lemley, an expert in generative AI and intellectual property, highlighted potential violations under California’s right of publicity law. This law prohibits using someone’s name or likeness for commercial gain without permission. While there are exceptions if the new work is “entirely new,” Lemley argued that these bots merely reuse the stars’ images without significant transformation.
The rise of deepfake AI tools capable of producing explicit content compounds these risks. Reuters also found that Elon Musk’s AI platform Grok generates explicit images of celebrities, showing how widespread the challenge is.
Meta has also faced criticism for internal guidelines that previously allowed bots to engage in romantic or sensual conversations with children. Stone said these guidelines were created in error and are now being revised to prevent such content.
What Legal Professionals Should Watch
- Right of publicity laws and their application to AI-generated likenesses
- Policy enforcement gaps in AI content moderation
- Risks of unauthorized use of celebrity images, especially involving minors
- Potential for AI-generated content to cross into illegal or harmful territory
As AI tools become more accessible, legal teams must stay alert to evolving challenges around consent, intellectual property, and online safety. For professionals working at the intersection of AI and compliance, AI training courses tailored to legal roles can offer valuable guidance on managing these issues effectively.