Spain probes X, Meta, and TikTok over AI-generated child abuse content: what PR leaders need to do now
Spain will investigate X, Meta, and TikTok over the distribution of child sexual abuse material, including AI-generated deepfakes, the government announced Tuesday. State prosecutors are set to examine whether platform tools and policies enabled the creation and spread of illegal content while allowing offenders to avoid detection.
Spanish Prime Minister Pedro Sánchez said the alleged content endangers young people and called for accountability. "These platforms are jeopardizing the mental health, dignity and rights of our sons and daughters," he posted. "The state cannot allow this. The impunity of the giants must end."
What's under investigation
The government plans to invoke Article 8 of the Organic Statute of the Public Ministry to request a formal probe. Officials are examining potential criminal liability for the generation and dissemination of child sexual abuse material through deepfakes and manipulated images that degrade victims' dignity.
A recent report cited by officials argues that social platforms enable the rapid creation and distribution of abusive content that evades detection and prosecution. Meanwhile, the platforms continue to operate and profit at scale, intensifying public and regulatory scrutiny.
The broader backdrop
The Spanish move follows raids by French authorities on X's Paris offices over similar concerns. X denies wrongdoing. The platform recently integrated Grok AI from xAI, while TikTok and Meta offer their own AI features across Facebook, Instagram, Messenger, and WhatsApp.
The issue touches free speech debates in the EU and the United States, as well as data protection law. Ireland's Data Protection Commission is involved in the European Commission's inquiry into X over alleged deepfake generation using Grok, including sexualized images of real people and children; the inquiry will assess compliance with EU rules on personal data and risk controls. See the Irish DPC and the EU's Digital Services Act for context.
Why this matters for PR and communications
- Reputational risk spikes: child safety is a zero-tolerance issue for the public, press, employees, and partners.
- Regulator timelines are compressing. Expect fast requests for information, public statements, and corrective actions.
- Advertisers and creators will reassess platform and brand alignments. Silence or vague statements invite backlash.
- AI features are now part of the risk surface. Comms teams need fluency in how these tools are governed and audited.
Immediate actions for comms teams
- Stand up a cross-functional incident cell with Legal, Trust & Safety, Policy, Product, and Data teams. Meet daily until risk is contained.
- Audit all AI features and content flows tied to image generation, recommendations, and reporting. Document controls, gaps, and remediation timelines.
- Prepare a clear position on child safety, deepfakes, and enforcement. Include what's changing now, what's next, and who's accountable.
- Draft a Q&A covering law enforcement cooperation, user reporting pathways, model safeguards, and takedown speeds.
- Create pre-approved statements for regulators, enterprise clients, advertisers, and press. Customize by stakeholder and region.
- Set escalation thresholds: when to pause features, freeze ads, or restrict API access. Communicate triggers and review cadence.
- Publish near-term transparency updates (e.g., volume of reports, removals, median response times) with dates and owners.
- Coordinate with child safety NGOs and hotlines to strengthen referrals and survivor-first language.
Messaging guardrails to avoid unforced errors
- Don't lean on clichés like "we take this seriously." Show receipts: concrete actions, dates, and measurable targets.
- Avoid framing this purely as "bad actors." Acknowledge system fixes: detection, reporting, model safeguards, and human review coverage.
- Be specific about cooperation with authorities and compliance with EU requirements. Share the process, not internal spin.
- Use clear, humane language. Center victims' safety and dignity. Cut technical jargon unless it explains an action.
If your brand relies on X, Meta, or TikTok
- Build a brand safety matrix with "pause/resume" criteria tied to child safety incidents and regulatory findings.
- Set creative and influencer guidelines that prohibit AI-generated likenesses of real people without consent.
- Pre-draft responses for media and clients if your ads or content appear near harmful material. Include evidence of monitoring.
- Track public sentiment and policy announcements daily. Adjust spend, placements, and messaging in real time.
Preparing leadership
- Brief executives on the legal basis of the Spanish probe, the French raid on X, and ongoing EU inquiries.
- Rehearse tough questions: model training data, misuse prevention, error rates, escalation pathways, and accountability.
- Align internal and external talking points. Inconsistency will surface and damage credibility fast.
Build team capability
If your comms team needs structure for AI policy, crisis playbooks, and stakeholder messaging, explore the AI Learning Path for Public Relations Specialists for practical frameworks and tools.
Bottom line
Spain's action signals a harder regulatory line on AI, platforms, and child safety. PR leaders should move first, communicate clearly, and prove change with data and deadlines.