Japan Calls on X to Curb AI-Generated Sexualised Images
Japan has asked X to act against the sexualised alteration of images produced with the platform's AI features. The operator is being pressed to submit a corrective plan, with the government warning that it may issue guidance under its AI law if progress stalls.
Minister for AI Strategy Kimi Onoda said, "We will expedite discussions while gaining the cooperation of relevant ministries and agencies," following a Cabinet meeting.
Why this matters now
X has seen a spike in AI-generated posts that manipulate photos of real people into sexual content. With tens of millions in Japan using the service, the risk of rapid abuse and viral spread is high.
These fakes can infringe on rights such as publicity and reputation, and create lasting harm even after removal.
Platform response
X, led by Elon Musk, introduced "Grok" last year, enabling users to manipulate images via prompts. The service has drawn scrutiny as fake sexual images circulate widely in multiple countries.
X said, "We take action to remove high-priority violative content, including Child Sexual Abuse Material and non-consensual nudity." For reference, see X's policy on non-consensual nudity: policy link.
Global context
Governments in Britain, Canada, and Malaysia have also flagged the surge in AI-fuelled sexualised fakes as a public harm. Cross-border enforcement, reporting standards, and platform-level safety features are now a shared priority.
What government teams can do next
- Require a time-bound remediation plan from X with clear metrics: model-level safeguards, detection rates, removal SLAs, appeal paths, and user reporting effectiveness (a minimal SLA-reporting sketch follows this list).
- Set expectations for default protections: stronger prompt filtering, stricter rate limits, and friction for risky actions (e.g., verified consent checks for image edits of real people).
- Push for provenance tools: adoption of content credentials (C2PA) or similar signals so images carry verifiable edit history and source data (a provenance-check sketch also follows this list).
- Establish a rapid takedown lane for authorities and accredited hotlines, with auditable logs and evidence preservation to support investigations.
- Define penalties for repeat violators and require transparency reports specific to synthetic sexual content and non-consensual imagery.
- Coordinate across ministries on victim support, legal clarity (publicity and reputational harms), and cross-border requests.
- Update procurement and funding criteria to favour platforms and vendors that implement watermarking, provenance, and abuse detection by default.
- Run public guidance campaigns on reporting pathways and the legal risks of sharing synthetic sexual images of real people.
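To make "clear metrics" concrete, here is a minimal sketch of computing removal-SLA figures from a hypothetical CSV export of takedown reports. The file name, column names (report_id, reported_at, actioned_at), and the 24-hour target are illustrative assumptions, not an X API or an agreed standard.

```python
import csv
from datetime import datetime, timedelta

# Hypothetical export of takedown reports; file and column names are
# illustrative assumptions, not a real platform export format.
SLA = timedelta(hours=24)  # example target: action within 24 hours

def parse(ts: str) -> datetime:
    # Timestamps assumed to be ISO 8601, e.g. "2025-11-03T14:02:00"
    return datetime.fromisoformat(ts)

def sla_summary(path: str) -> dict:
    total = within_sla = 0
    delays = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if not row.get("actioned_at"):
                continue  # still open; a real report would count these separately
            delay = parse(row["actioned_at"]) - parse(row["reported_at"])
            delays.append(delay)
            total += 1
            if delay <= SLA:
                within_sla += 1
    delays.sort()
    return {
        "actioned_reports": total,
        "pct_within_24h": 100.0 * within_sla / total if total else 0.0,
        "median_hours": (delays[len(delays) // 2].total_seconds() / 3600
                         if delays else None),
    }

if __name__ == "__main__":
    print(sla_summary("takedown_reports.csv"))
```

Even a simple summary like this gives regulators a repeatable yardstick: the same script run on each transparency report shows whether removal times are actually improving.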
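On the provenance item, a sketch of checking an image for C2PA Content Credentials by shelling out to the open-source c2patool CLI. The tool exists, but the assumption that its default invocation prints the manifest store as JSON, plus the example file name, should be treated as illustrative rather than a production verifier.

```python
import json
import subprocess

def read_content_credentials(image_path: str) -> dict | None:
    """Ask c2patool for the image's C2PA manifest store.

    Assumes c2patool is installed and on PATH, and that its default
    invocation prints the manifest report as JSON (an assumption).
    Returns parsed JSON, or None when no credentials are found.
    """
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # Non-zero exit is taken here to mean "no manifest or error"
        return None
    return json.loads(result.stdout)

if __name__ == "__main__":
    manifest = read_content_credentials("edited_photo.jpg")  # hypothetical file
    if manifest is None:
        print("No Content Credentials found; treat provenance as unknown.")
    else:
        # The manifest store records claims about how the image was
        # created and edited; inspect it for AI-edit assertions.
        print(json.dumps(manifest, indent=2)[:2000])
```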
Related precedent
Last year, the government also sought changes from OpenAI over its video generator "Sora" after outputs closely resembled Japanese anime.
Bottom line
The ask is simple: stronger safeguards, faster removals, and transparent reporting from X. If progress lags, formal guidance under Japan's AI law is on the table.
For agencies building internal capability on AI policy, risk, and operations, see curated programs by job role: Complete AI Training - Courses by Job.