India orders X to rein in Grok over obscene AI content: what government teams need to know
Updated: January 3, 2026, 10:40 AM UTC
India's IT ministry has directed X to implement immediate technical and procedural changes to its Grok AI chatbot after users flagged lewd and sexualized images, including images involving minors. The order gives X 72 hours to act and warns that non-compliance could put the platform's safe harbor protections at risk under India's IT law.
What the order requires
- Block the generation and distribution of content involving nudity, sexualization, sexually explicit material, or otherwise unlawful content.
- Prevent hosting or dissemination of obscene, pornographic, vulgar, indecent, sexually explicit, or pedophilic content.
- Submit an action-taken report within 72 hours, detailing technical and procedural safeguards deployed.
- Maintain compliance to retain "safe harbor" protections typically available to intermediaries under Indian law (see the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021).
What triggered the directive
Users and lawmakers reported that Grok could alter images of individuals, primarily women, to appear in bikinis. A formal complaint by Indian parliamentarian Priyanka Chaturvedi escalated the issue and pressed for stronger guardrails.
Separately, users flagged sexualized images involving minors generated via the chatbot. X acknowledged gaps in safeguards, removed the images, and advised users to report child exploitation material to the appropriate authorities, including the NCMEC CyberTipline.
Why this matters for government
India is a critical digital market, and this directive signals a firmer stance on platform accountability for AI-generated content. Expect closer scrutiny of AI safety controls, faster compliance timelines, and more explicit expectations for incident response.
Given X's ongoing legal challenge to India's content rules, agencies should prepare for parallel tracks: continued enforcement while litigation proceeds. The practical takeaway is simple: guardrails first, litigation second.
Immediate actions for public-sector teams
- Audit official accounts: confirm default safety settings, disable risky image-edit features, and restrict access to approved staff.
- Review contracts and MoUs: require platforms to meet Indian legal standards, provide safety documentation, and enable rapid takedown channels.
- Set incident playbooks: define who files notices to platforms and authorities, how evidence is preserved, and the timeline for follow-up.
- Mandate reporting paths: include protocols for child-safety escalation and coordination with law enforcement.
- Require safety attestations: ask for model safety briefs, red-teaming summaries, and logs of abuse-prevention updates after major incidents.
What platforms operating in India should do now
- Enforce strict content filters for image generation/editing, especially around minors, sexualization, and nudity.
- Combine prompt blocking, image hashing, and classifier checks pre- and post-generation; add human review for edge cases.
- Stand up a 24/7 escalation channel for notices from MeitY (India's Ministry of Electronics and Information Technology); commit to response SLAs measured in hours, not days.
- Publish an action-taken report within 72 hours, then maintain weekly change logs while issues stabilize.
- Run continuous red-teaming focused on gender-based abuse and child safety; document findings and fixes.
- Instrument transparent appeal and takedown flows; keep auditable records to support safe harbor claims.
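The layered filtering described above (prompt blocking, image hashing, classifier checks, human review for edge cases) can be sketched as a minimal moderation gate. Everything here is illustrative: the regex denylist, the MD5 exact-match set, and the score thresholds are stand-ins for what production systems do with trained classifiers and perceptual hashes (e.g., PhotoDNA-style matching), not real safeguards.

```python
import hashlib
import re

# Hypothetical prompt denylist; real systems use trained text classifiers,
# not keyword regexes, which are trivially evaded.
BLOCKED_PROMPT_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bnud(e|ity)\b", r"\bundress\b", r"\bminor\b")
]

# Hypothetical known-bad hash set; production systems use perceptual
# hashes that survive re-encoding, not exact MD5 matches.
KNOWN_BAD_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}

# Classifier scores in this band are ambiguous and go to human review.
REVIEW_BAND = (0.3, 0.8)


def moderate(prompt: str, image_bytes: bytes, classifier_score: float) -> str:
    """Return 'block', 'review', or 'allow' for one generation request."""
    # 1. Pre-generation: refuse if the prompt matches the denylist.
    if any(p.search(prompt) for p in BLOCKED_PROMPT_PATTERNS):
        return "block"
    # 2. Post-generation: exact-match the output against known-bad hashes.
    if hashlib.md5(image_bytes).hexdigest() in KNOWN_BAD_HASHES:
        return "block"
    # 3. Classifier check: high scores block outright; the ambiguous
    #    middle band escalates to a human reviewer.
    low, high = REVIEW_BAND
    if classifier_score >= high:
        return "block"
    if classifier_score >= low:
        return "review"
    return "allow"
```

The key design choice mirrored here is ordering: cheap prompt checks run before the expensive generation step, while hashing and classification run on the output, with a human-review band rather than a single hard threshold so borderline cases are escalated instead of silently allowed or blocked.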
Enforcement outlook
Expect tighter enforcement windows, more prescriptive safeguards, and expanded liability exposure for non-compliance. Given the visibility of Grok's outputs inside X's public feed, future incidents will draw fast attention and faster orders.
This is a bellwether for how governments will hold AI-enabled platforms responsible across jurisdictions. If the Indian approach hardens, others may follow.
Optional resources
If your agency is building AI literacy and safety capacity, explore structured upskilling: Complete AI Training - Courses by Job.