TikTok flooded with AI videos sexualising minors that link to Telegram groups sharing child sexual abuse material, report says

Maldita found over 5,200 AI-made clips sexualising minors on TikTok, with comments sending users to Telegram groups. Platforms cite removals, but safety has to be built in.

Published on: Dec 15, 2025

Report: AI-generated sexualised videos of minors spread on TikTok; links funnel users to Telegram

AI-made videos depicting young girls in sexualised clothing and poses have racked up millions of views on TikTok. A Spanish fact-checking group, Maldita, identified more than 20 accounts posting over 5,200 such clips, with a combined audience of 550,000+ followers and nearly 6 million likes.

Many posts included comment links steering viewers to Telegram groups allegedly selling child sexual abuse material (CSAM). Maldita says it reported 12 of these groups to Spanish police.

What the investigation found

Accounts focused on synthetic videos of young girls in bikinis, school uniforms, or tight outfits. Most clips lacked clear AI labels. Some included a "TikTok AI Alive" watermark, which is applied when still images are animated within the app.

The investigation also found that creators monetised the content via TikTok's subscription features, with the platform reportedly taking around 50% under its creator agreements.

Policy backdrop

Lawmakers in Australia, Denmark, and the European Union are enforcing or debating restrictions for users under 16 to reduce online harms. TikTok's rules require AI disclosure and allow removal of content deemed harmful to individuals, including minors.

What TikTok and Telegram say

Telegram stated it scans media on public parts of its platform against known CSAM hashes and removed over 909,000 groups and channels in 2025. It noted that offenders rely on private groups and other platforms' algorithms to grow.

TikTok said it automatically removes 99% of content harmful to minors and proactively removes 97% of offending AI-generated content. The company says it removed 189 million videos and banned 108 million accounts between April and June 2025, that it rapidly suppresses or closes accounts sharing sexual content involving children, and that it reports cases to the US National Center for Missing and Exploited Children (NCMEC).

Update: This article includes comments from Telegram and TikTok.

Why this matters for IT and development teams

AI reduces the time and cost to create abusive content. Labeling and watermarking can be bypassed, especially when multiple creation tools are chained. Monetisation features and recommender systems can amplify risk if safeguards lag.

The takeaway: trust-and-safety has to be engineered into upload, comment, recommendation, and subscription flows, not bolted on later.

Practical steps to reduce harm (for product, security, and trust-and-safety teams)

  • Verify AI disclosure: use content provenance standards (e.g., signed metadata) and block uploads that strip required markers.
  • Go beyond simple hash-matching: deploy perceptual hashing and synthetic-variant detection to catch AI edits and re-uploads (see the hashing sketch after this list).
  • Layer risk scoring: blend model signals (age cues, pose, clothing context, text/emoji in captions and comments) with heuristics for external-link bait (see the scoring sketch below).
  • Throttle growth vectors: add friction on posts/comments that include invite links (e.g., to Telegram), including cooldowns, shadow-limits, or pre-publish review (see the cooldown sketch below).
  • Subscription safety gates: require heightened verification and human review for paywalled creator content; hold payouts until checks clear (see the payout-gate sketch below).
  • Cluster takedowns: remove content and accounts at the network level (creators, amplifiers, and payment endpoints) using graph analysis (see the graph sketch below).
  • Incident response: maintain audit logs, legal holds, and rapid reporting pipelines to child-safety authorities.
  • Measure what matters: track time-to-remove, repeat-offender rates, link click-through to off-platform groups, and model precision/recall on minors' safety classifiers (see the metrics sketch below).
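
The sketches below illustrate several of the steps above in Python. First, perceptual hashing: a minimal difference-hash (dHash) sketch, assuming Pillow is installed and that video frames have already been extracted as images; the hash size, distance threshold, and function names are illustrative rather than any platform's actual pipeline.

```python
# dHash sketch: flag frames whose hash is close to a known-bad hash.
# Assumes Pillow is installed; threshold and hash size are illustrative.
from PIL import Image

def dhash(image_path: str, hash_size: int = 8) -> int:
    """Difference hash: compare adjacent pixels of a downscaled grayscale image."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def matches_known_item(frame_path: str, known_hashes: list[int], max_distance: int = 10) -> bool:
    """True if the frame is within max_distance bits of any known-bad hash."""
    h = dhash(frame_path)
    return any(hamming_distance(h, k) <= max_distance for k in known_hashes)
```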
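
For risk scoring, a sketch that blends hypothetical classifier outputs with simple link-bait heuristics; the weights, thresholds, and phrase list are made up for demonstration and would need calibration against real review outcomes.

```python
# Risk-scoring sketch: hypothetical classifier outputs plus link-bait heuristics.
# Weights, thresholds, and phrases are placeholders, not calibrated values.
import re
from dataclasses import dataclass, field

INVITE_LINK = re.compile(r"(t\.me/|telegram\.me/|discord\.gg/)", re.IGNORECASE)

@dataclass
class Signals:
    minor_likelihood: float        # classifier output in [0, 1]
    sexualisation_score: float     # classifier output in [0, 1]
    caption: str = ""
    comment_sample: list[str] = field(default_factory=list)

def risk_score(s: Signals) -> float:
    score = 0.55 * s.minor_likelihood + 0.35 * s.sexualisation_score
    text = " ".join([s.caption] + s.comment_sample).lower()
    if INVITE_LINK.search(text):
        score += 0.25                      # off-platform funnel is a strong signal
    if any(p in text for p in ("dm for more", "link in bio", "vip group")):
        score += 0.10
    return min(score, 1.0)

def route(s: Signals) -> str:
    r = risk_score(s)
    if r >= 0.8:
        return "block_and_escalate"        # immediate removal + human escalation
    if r >= 0.5:
        return "hold_for_review"           # pre-publish human review
    return "allow"
```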
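
For throttling growth vectors, a sketch of per-account friction on invite-link comments; the in-memory store, regex, and 15-minute window stand in for whatever rate-limiting infrastructure a real platform uses.

```python
# Cooldown sketch: add friction to comments containing Telegram invite links.
# The in-memory dict stands in for a shared rate-limit store (e.g., Redis).
import re
import time

INVITE_LINK = re.compile(r"https?://(t\.me|telegram\.me)/\S+", re.IGNORECASE)
COOLDOWN_SECONDS = 15 * 60                 # one invite-link comment per 15 minutes
_last_invite_post: dict[str, float] = {}

def handle_comment(account_id: str, text: str) -> str:
    """Return an action for a new comment: 'publish', 'throttle', or 'review'."""
    if not INVITE_LINK.search(text):
        return "publish"
    now = time.time()
    last = _last_invite_post.get(account_id)
    _last_invite_post[account_id] = now
    if last is not None and now - last < COOLDOWN_SECONDS:
        return "throttle"                  # silently limit repeat invite-link posting
    return "review"                        # first invite link goes to pre-publish review
```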
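
For subscription safety gates, a sketch of a payout hold that only releases once identity verification and human review both pass; the states and inputs are illustrative, not TikTok's actual creator flow.

```python
# Payout-gate sketch: hold creator payouts until verification and review clear.
from enum import Enum
from typing import Optional

class PayoutState(Enum):
    HELD = "held"
    RELEASED = "released"
    FROZEN = "frozen"

def payout_decision(identity_verified: bool, review_passed: Optional[bool]) -> PayoutState:
    """review_passed is None while human review is still pending."""
    if review_passed is False:
        return PayoutState.FROZEN          # confirmed violation: freeze and escalate
    if identity_verified and review_passed:
        return PayoutState.RELEASED
    return PayoutState.HELD                # default: hold until all checks clear
```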
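
For cluster takedowns, a sketch using networkx (assumed installed): accounts, videos, funnel links, and payment endpoints become graph nodes, and connected components containing a confirmed-bad seed become candidate takedown clusters; the edge list is invented for illustration.

```python
# Graph sketch: connected components that touch a confirmed-bad seed node
# become candidate takedown clusters. Assumes networkx is installed.
import networkx as nx

def takedown_clusters(edges: list[tuple[str, str]], seeds: set[str]) -> list[set[str]]:
    g = nx.Graph()
    g.add_edges_from(edges)
    return [c for c in nx.connected_components(g) if c & seeds]

edges = [
    ("account:a1", "video:v1"),
    ("video:v1", "link:t.me/xyz"),
    ("account:a2", "link:t.me/xyz"),       # amplifier sharing the same funnel link
    ("account:a2", "payout:p9"),
    ("account:a3", "video:v7"),            # unrelated cluster, not returned
]
print(takedown_clusters(edges, seeds={"video:v1"}))
# one cluster: a1, v1, the t.me link, a2, and the payout endpoint p9
```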
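
Finally, for measurement, a sketch computing median time-to-remove and classifier precision/recall; the record schema (epoch-second timestamps, 'flagged' model output, 'violating' human label) is hypothetical.

```python
# Measurement sketch: median time-to-remove and precision/recall, computed from
# hypothetical review records with epoch-second timestamps and human labels.
from statistics import median

def median_time_to_remove_minutes(records: list[dict]) -> float:
    """Median minutes from first report to removal, over items that were removed."""
    deltas = [(r["removed_at"] - r["reported_at"]) / 60
              for r in records if r.get("removed_at")]
    return median(deltas) if deltas else float("nan")

def precision_recall(records: list[dict]) -> tuple[float, float]:
    """Classifier flag ('flagged') vs. human-review ground truth ('violating')."""
    tp = sum(1 for r in records if r["flagged"] and r["violating"])
    fp = sum(1 for r in records if r["flagged"] and not r["violating"])
    fn = sum(1 for r in records if not r["flagged"] and r["violating"])
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```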

If you encounter suspected CSAM

Do not share or save the content. Report it immediately to your platform's reporting tools and to your national hotline. In the U.S., submit a report to the National Center for Missing & Exploited Children (NCMEC). For platform teams, preserve evidence lawfully and escalate via established trust-and-safety protocols.


Bottom line

AI has lowered the barrier to produce and spread abusive material. The scale of removals shows action, but the incentives and distribution mechanics still favour bad actors. Ship safety features as first-class product work across creation, comments, monetisation, and recommendations, and keep iterating as attackers adapt.

