Spain probes X, Meta and TikTok over AI child sexual abuse deepfakes in bid to end platform impunity

Spain will ask prosecutors to probe X, Meta and TikTok over AI child-abuse deepfakes. EU and Ireland step up scrutiny as Spain readies an under-16 ban and tougher platform duties.

Categorized in: AI News, PR and Communications
Published on: Feb 18, 2026

Spain targets X, Meta, and TikTok over AI-generated child sexual abuse material: a PR and Comms playbook

Spain will ask prosecutors to investigate X, Meta, and TikTok for potential criminal offences tied to the generation and spread of AI-driven child sexual abuse material. The move aims to protect "the mental health, dignity and rights of our sons and daughters" and end the "impunity" of major platforms, according to Prime Minister Pedro SΓ‘nchez.

An expert report flagged possible criminal liability around deepfakes and manipulated images that sexualise minors, and warned platforms enable "massive dissemination" with speed and opacity that obstruct detection, enforcement and takedowns. The cabinet will formally request the attorney general to investigate and, where applicable, prosecute.

This comes alongside plans to ban under-16s from social media in Spain and to impose new duties on platforms for hateful and harmful content. It also follows an EU probe into X after claims its AI chatbot, Grok, produced sexualised images of real people, including minors. Ireland's Data Protection Commission has opened a "large-scale" inquiry into Grok's generative features to determine GDPR compliance.

Platforms have responded by stressing zero tolerance for child sexual exploitation and stating that such content is removed when found. Spanish officials counter that algorithms must not "amplify or protect" digital sexual violence against children, and that enforcement will intensify.

Why this matters for PR and Communications

  • Regulatory risk is now communications risk. Spain's action and Ireland's GDPR investigation signal tighter scrutiny of AI outputs, moderation workflows and transparency.
  • The Digital Services Act raises stakes on illegal content, risk assessments, and crisis protocols for very large platforms. Expect fast information requests and public accountability moments.
  • Media, policymakers, and advertisers will judge your brand on speed of response, evidence of control, and willingness to cooperate with authorities.
  • Cross-border exposure is real: one AI feature can trigger parallel inquiries across Spain, Ireland, Brussels and beyond.

Immediate actions for comms leaders

  • Spin up a cross-functional incident cell (Policy, Trust & Safety, Legal, Product, Engineering, PR). Set 24/7 comms and decision rights now.
  • Inventory every generative feature, model, and prompt interface. Document safeguards, age gates, abuse prevention, and escalation paths.
  • Pre-approve holding lines for regulators, media, and advertisers. Include concrete timeframes, contacts, and next milestones.
  • Publish a clear child-safety position: definitions, detection partners, reporting channels, and average removal times.
  • Run red-team tests for prompts likely to produce sexualised outputs of real people or minors. Log results and fixes; brief regulators proactively if needed.
  • Align legal risk and PR messaging on GDPR lawful bases, DSA duties, and cooperation commitments. No hedging, no jargon.

Messaging principles that hold under pressure

  • Lead with child safety. State what you've done, what failed, and what changes today.
  • Own the system, not just "bad actors." Explain how your product prevents, detects, and removes abuse; then show the numbers.
  • Be specific: policies, models, thresholds, reviewer coverage, average time to action, law-enforcement referrals.
  • Cooperate out loud: name the authorities and NGOs you work with, and the cadence of updates you will provide.
  • Avoid defensiveness. If content slipped through, say so, fix it, and set a deadline for the next update.

Risk signals to monitor daily

  • Prompts yielding synthetic sexualised images of minors or real people; any reproducible paths users share.
  • Spike in child-safety reports, takedown queues, or time-to-action; mismatches between AI and human moderation outcomes.
  • Regulatory pings from Spain's attorney general, Ireland's DPC, or the European Commission; platform store policy flags.
  • Advertiser threats to pause spend; civil society briefs; coordinated investigative posts with screenshots.
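Several of the signals above are spikes in a daily metric (report volume, takedown queue depth, time-to-action). A minimal sketch of how a comms or Trust & Safety team could automate that check, assuming daily counts are already being collected; the threshold `k` and the example figures are illustrative, not drawn from any platform's actual tooling:

```python
from statistics import mean, stdev

def is_spike(history: list[float], today: float, k: float = 3.0) -> bool:
    """Flag today's value if it exceeds the historical mean by k standard deviations."""
    if len(history) < 2:
        return False  # not enough history to estimate variability
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return today > mu  # flat baseline: any increase is notable
    return today > mu + k * sigma

# Example: hypothetical daily child-safety report counts over the past week
reports = [120, 131, 118, 125, 129, 122, 127]
print(is_spike(reports, 130))  # within the normal range -> False
print(is_spike(reports, 310))  # sharp spike worth escalating -> True
```

A threshold-based check like this only decides when to wake the incident cell; the judgment about whether a spike reflects a product gap or an external event stays with humans.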

Scenario plans and approved lines

  • Investigation announced: "We support the inquiry and will provide full access to our policies, detection data, and incident logs. We will publish weekly updates on actions taken and timelines for additional safeguards."
  • Verified policy breach: "We identified a gap that allowed prohibited content. We disabled the feature, removed content, notified authorities, and are deploying additional filters and human review before re-enablement."
  • Regulator data request: "We've acknowledged the request and are delivering datasets, product notes, and risk assessments by [date]. We welcome supervisory guidance and will integrate it into product changes."
  • Advertiser concerns: "Brand safety and child safety are inseparable. We're sharing independent audit findings, enforcing stricter placements, and offering account-level controls effective [date]."

What's next

  • Spain: timeline for the attorney general's probe; details of the under-16 social media ban and enforcement model.
  • EU: further Digital Services Act enforcement on illegal content risk mitigation and transparency reporting.
  • Ireland: outcomes from the DPC's inquiry into Grok's generative functionality and GDPR compliance posture.
  • Copycat measures: watch the UK, France, Greece, and Australia (already banning under-16s) for aligned moves.

Bottom line

Authorities are closing the gap between AI capability and accountability. Treat child-safety risk as a standing incident, show your work publicly, and move faster than regulators expect. That is both the right thing to do and the only sustainable communications strategy.

