Malaysia, France, India Press X Over 'Offensive' Grok Images
Governments are turning up the heat on X after its AI chatbot, Grok, generated sexualized images, including depictions of minors, in response to user prompts. The incident has drawn sharp criticism and prompted several countries to signal potential enforcement action.
Malaysia said it is investigating images produced by Grok following complaints about the misuse of AI to manipulate images of women and minors into indecent or grossly offensive content. France and India have also called out the platform, pushing for stronger safeguards and accountability.
Why this matters for public officials
AI image tools can scale harm in minutes, outpacing traditional moderation. When minors are involved, the stakes move from reputational damage to criminal liability and cross-border cooperation.
This moment is a stress test for platform governance, incident response, and whether companies can prevent foreseeable misuse with basic guardrails.
Immediate actions agencies can consider
- Preservation and takedown: Issue preservation requests for evidence, coordinate fast takedowns, and require auditable logs for generated content and prompts.
- Safety-by-default: Mandate NSFW filtering, child safety scanning (e.g., hash-matching; see the sketch after this list), blocked prompts, and rate limits for image generation.
- Risk assessments: Require documented risk analyses for generative features, including abuse scenarios and mitigations for minors.
- Audit and red-teaming: Enforce independent audits, routine red-teaming against sexual exploitation risks, and periodic public reporting.
- User reporting and escalation: Ensure easy reporting, 24/7 escalation paths, and defined SLAs for removal and law-enforcement referrals.
- Transparency: Demand clear labeling of AI-generated images, incident reporting obligations, and access to technical information for regulators.
- Penalties and compliance: Tie non-compliance to meaningful fines and potential service restrictions for repeat or systemic failures.
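The hash-matching mentioned under "Safety-by-default" can be illustrated concretely. Below is a minimal Python sketch, assuming a hash list supplied by a child-safety organization; real deployments use perceptual hashing systems such as Microsoft PhotoDNA or Meta's PDQ rather than the exact-match hashing shown here, and the function and file names are illustrative.

```python
# Minimal sketch of hash-matching for known illegal imagery.
# Production systems use perceptual hashes (e.g., PhotoDNA, PDQ) against
# databases maintained by child-safety organizations; exact SHA-256 matching
# below only illustrates the screening workflow.
import hashlib

# Hypothetical blocklist of hex-encoded hashes from a trusted source.
KNOWN_HASHES: set[str] = set()

def load_hash_list(path: str) -> None:
    """Load one hex-encoded hash per line into the blocklist."""
    with open(path, encoding="utf-8") as f:
        KNOWN_HASHES.update(line.strip().lower() for line in f if line.strip())

def is_known_match(image_bytes: bytes) -> bool:
    """Return True if the image's hash appears in the blocklist."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

def screen_generated_image(image_bytes: bytes) -> bytes:
    """Block any generated image that matches a known hash, else pass it through."""
    if is_known_match(image_bytes):
        # In production: quarantine the output, preserve the image and its
        # originating prompt as evidence, and trigger the referral workflow.
        raise PermissionError("generated image matched child-safety hash list")
    return image_bytes
```

The point for regulators is the workflow, not the specific hash function: every generated image is screened before release, matches are blocked, and the event is logged and preserved for referral.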
Questions to put to X and Grok
- What prompt and output filters are active for sexual content, minors, and image manipulation involving real people?
- Are child safety databases used for detection and blocking? How are edge cases handled?
- What proportion of flagged content is removed within 1 hour, 24 hours, and 7 days? (A simple way to compute this is sketched after this list.)
- How are incidents reported to authorities, and what data is preserved to support investigations?
- What third-party audits validate these controls and their effectiveness over time?
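The removal-time question lends itself to a simple, auditable calculation. Here is a minimal sketch, assuming a log where each flagged item records when it was flagged and, if removed, when it was removed; the field names are illustrative, not a real platform schema.

```python
# Minimal sketch: share of flagged items removed within given time windows.
from datetime import datetime, timedelta

def removal_rates(records: list[dict], windows: dict[str, timedelta]) -> dict[str, float]:
    """For each window, return the fraction of flagged items removed in time."""
    rates = {}
    total = len(records)
    for label, window in windows.items():
        removed = sum(
            1 for r in records
            if r.get("removed_at") is not None
            and r["removed_at"] - r["flagged_at"] <= window
        )
        rates[label] = removed / total if total else 0.0
    return rates

# Illustrative log: one item removed in 30 minutes, one after 27 hours, one never.
sample = [
    {"flagged_at": datetime(2025, 1, 1, 12), "removed_at": datetime(2025, 1, 1, 12, 30)},
    {"flagged_at": datetime(2025, 1, 1, 12), "removed_at": datetime(2025, 1, 2, 15)},
    {"flagged_at": datetime(2025, 1, 1, 12), "removed_at": None},
]
print(removal_rates(sample, {
    "1h": timedelta(hours=1),
    "24h": timedelta(hours=24),
    "7d": timedelta(days=7),
}))
```

Requiring platforms to report against a defined calculation like this makes the answers comparable across incidents and over time.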
Policy levers already on the table
In France, X falls under the EU's Digital Services Act, which imposes duties on very large online platforms for risk management, transparency, and mitigation of systemic risks. India's IT Rules set out obligations for intermediaries on user safety and prompt action against unlawful content.
- EU Digital Services Act overview
- India's Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
In Malaysia, existing laws such as the Communications and Multimedia Act 1998 can support investigations and enforcement while broader AI safety standards are developed.
Procurement and public-sector use
- Contract clauses: Require child safety controls, content logging, audit rights, model update disclosure, and kill-switch capabilities (see the checklist sketch after this list).
- Data handling: Ban training on user uploads without consent and require deletion protocols for illegal content.
- Access controls: Enforce age gates, human-in-the-loop for risky features, and default-on safety filters for all public-sector deployments.
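One way to make such clauses enforceable is to express them as a machine-checkable checklist that procurement teams run against vendor attestations. The sketch below is illustrative only; the control names are assumptions, not drawn from any standard or real contract.

```python
# Minimal sketch: procurement safety clauses as a machine-checkable checklist.
from dataclasses import dataclass

@dataclass
class VendorSafetyProfile:
    """Hypothetical vendor attestation against contract safety clauses."""
    child_safety_controls: bool = False
    content_logging: bool = False
    audit_rights: bool = False
    model_update_disclosure: bool = False
    kill_switch: bool = False
    age_gating: bool = False
    default_on_safety_filters: bool = False

REQUIRED_CONTROLS = [
    "child_safety_controls", "content_logging", "audit_rights",
    "model_update_disclosure", "kill_switch", "age_gating",
    "default_on_safety_filters",
]

def missing_controls(profile: VendorSafetyProfile) -> list[str]:
    """Return required controls the vendor has not attested to."""
    return [name for name in REQUIRED_CONTROLS if not getattr(profile, name)]

# Example: a vendor attesting to only two controls fails the check.
vendor = VendorSafetyProfile(child_safety_controls=True, content_logging=True)
print(missing_controls(vendor))
```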
What happens next
This isn't just about one platform. It's a reminder that image generators must ship with guardrails that actually work. Governments have enough tools to demand that now, through investigations, audits, and enforceable timelines for fixes.
If your agency is building oversight capacity for AI tools, structured training can help teams evaluate risks, write enforceable requirements, and audit claims. See role-based options here: AI courses by job.