Japan to X: fix Grok's explicit image problem or face legal action

Japan is probing X's Grok over inappropriate image generation and may pursue legal steps. PR teams should pause AI image edits, tighten policies, and ready crisis lines.

Categorized in: AI News, PR and Communications
Published on: Jan 18, 2026

Japan probes X's Grok AI over inappropriate image generation: What PR and comms teams need to do now

Japan has opened an investigation into Grok, the AI service tied to Elon Musk's X, after reports it could generate inappropriate images. The Cabinet Office asked X Corp to make immediate fixes and flagged that legal measures are on the table if the situation doesn't improve.

xAI says it has rolled out tweaks to stop users from editing images of real people to depict them in revealing clothing, and has placed location-based blocks where such content is illegal. Economic Security Minister Kimi Onoda noted that the government has yet to receive a response from the company and will consider "every possible option" if issues persist.

The UK and Canada are moving ahead with their own probes. Malaysia and Indonesia have temporarily blocked access to Grok over explicit image creation, and global pressure is building as officials call out the risk of sexualized images of women and minors.

Why this matters for PR and communications

  • Reputation risk: Association with unsafe image outputs will draw scrutiny from media, NGOs, and regulators.
  • Regulatory exposure: Multiple jurisdictions are signaling enforcement, including potential legal action.
  • Platform risk: Brand activity on X could be questioned if AI image features are tied to harmful use cases.
  • Safety expectations: Stakeholders expect clear safeguards around minors and depictions of real people.

Immediate actions for brand and comms teams

  • Pause any campaign that uses AI image editing of real people, especially anything involving swimwear, minors, or suggestive contexts.
  • Audit workflows: Identify where Grok or similar tools touch image generation, editing, or prompt-based design (a first-pass scan is sketched after this list).
  • Update guidelines: Add explicit rules on depicting real people, minors, and "revealing clothing" across all channels.
  • Strengthen approvals: Require legal and child-safety review for any AI-generated visuals that include people.
  • Crisis prep: Draft holding statements, an FAQ, and escalation paths for incidents tied to AI imagery.
  • Vendor checks: Ask agencies and partners to confirm their AI safeguards and geo-blocking controls.
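
For the workflow audit, here is a minimal first-pass sketch, assuming campaign briefs and configs live as text/JSON/YAML files under a ./campaigns folder; the tool watchlist, folder layout, and file types are illustrative placeholders to adapt, not an exhaustive inventory.

```python
"""Minimal audit sketch: flag campaign files that mention AI image tools."""
from pathlib import Path

# Hypothetical watchlist; extend with the tools your agencies actually report.
AI_IMAGE_TOOLS = ["grok", "midjourney", "dall-e", "stable diffusion", "firefly"]

# Hypothetical layout: campaign briefs and configs under ./campaigns.
SCAN_SUFFIXES = {".txt", ".md", ".json", ".yaml", ".yml"}

def audit(root: str = "campaigns") -> list[tuple[Path, str]]:
    """Return (file, tool) pairs wherever a watched tool name appears."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in SCAN_SUFFIXES:
            continue
        text = path.read_text(errors="ignore").lower()
        hits.extend((path, tool) for tool in AI_IMAGE_TOOLS if tool in text)
    return hits

if __name__ == "__main__":
    for path, tool in audit():
        print(f"{path}: mentions '{tool}' -- route for legal/child-safety review")
```

A keyword scan like this only surfaces candidates for human review; it will miss tools referenced by nicknames, so it should feed the approval workflow above, not replace it.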

Suggested holding lines (short and usable)

  • "We don't use AI to create or edit images of real people in suggestive contexts. We've reinforced controls to prevent misuse."
  • "We've paused any AI image work pending a review of safety, legal, and regional compliance requirements."
  • "We support regulator efforts to protect users, especially minors, and are aligning our policies accordingly."

Questions to put to X/xAI and your agencies

  • What specific filters and classifiers are live to prevent sexualized images of women and minors?
  • How is geo-blocking determined, and which jurisdictions are currently blocked?
  • What audit logs, rate limits, and enforcement pathways exist for policy-violating prompts?
  • How fast can you roll back or disable risky features if an incident occurs?
  • What's the process for responding to regulator inquiries and preserving evidence?

Operational guardrails to implement now

  • Default-off for AI features that edit images of real people; whitelist only with senior approval (a minimal sketch follows this list).
  • Prohibit prompts involving minors, age-ambiguous subjects, or "revealing clothing."
  • Add keyword and visual moderation for uploads and user-generated content tied to campaigns.
  • Train spokespeople on phrasing that prioritizes safety, compliance, and concrete actions over tech optimism.
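
To make the first two guardrails concrete, here is a minimal sketch assuming a simple allowlist of campaign IDs for the default-off rule and a coarse keyword gate in front of proper moderation; the names, terms, and approval mechanism are illustrative assumptions, not a vetted safety system.

```python
"""Minimal guardrail sketch: default-off editing plus a coarse prompt screen."""

# Hypothetical allowlist: only campaigns with documented senior approval
# may edit images of real people. Empty by default (default-off).
APPROVED_CAMPAIGNS: set[str] = set()

# Coarse blocklist covering the categories named above; substring matching
# is deliberately over-broad and must sit in front of a real moderation
# classifier and human review, not replace them.
BLOCKED_TERMS = ["minor", "child", "teen", "swimwear", "revealing clothing"]

def may_edit_real_person(campaign_id: str) -> bool:
    # Default-off: editing images of real people requires explicit approval.
    return campaign_id in APPROVED_CAMPAIGNS

def screen_prompt(prompt: str) -> bool:
    """Return True only if the prompt passes the coarse keyword gate."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

if __name__ == "__main__":
    assert not may_edit_real_person("summer-launch")  # blocked until approved
    assert not screen_prompt("edit her photo into swimwear")
    assert screen_prompt("generate an abstract background texture")
```

Note the bias toward false positives: a term like "minor" will also flag harmless prompts, which is the right trade-off when the downside is regulatory exposure.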

Regulatory watchlist

  • Japan: Cabinet Office and AI strategy oversight signaling potential legal steps.
  • United Kingdom: The Information Commissioner's Office (ICO) is active on AI safety and data protection.
  • Canada: The Office of the Privacy Commissioner of Canada is coordinating AI investigations.
  • Malaysia and Indonesia: Temporary blocks on Grok underscore regional sensitivity to explicit content.

If you're planning AI training for PR teams

  • Focus on policy-aware content creation, image safety, and incident response drills.
  • For structured options, see role-based AI course paths: AI courses by job

Bottom line: Treat AI image generation as high risk until controls are proven. Tighten policies, ask hard questions, and keep your spokespeople ready with clear, safety-first messaging.

