Government to X: Fix Grok AI Misuse in 72 Hours or Risk Losing Safe-Harbour
The Government of India has issued a formal notice to X over the misuse of its Grok AI to generate sexualised images of women without consent. The platform has been told it is failing to enforce safeguards and is not adhering to Indian law.
The IT Ministry has asked X to submit a detailed response within 72 hours, citing "grave concern" over the risk to the dignity, privacy, and safety of women and children. The notice warns that continued hosting or facilitation of such content could strip X of legal immunity for third-party content.
What the government has demanded within 72 hours
- A detailed action-taken report on Grok AI's technical and organisational controls.
- Specifics on the India Chief Compliance Officer's oversight and interventions.
- Documentation of actions taken against offending content, users, and accounts.
Scope of the required Grok AI review
- End-to-end audit of prompt processing, output generation, and image handling.
- Verification of safety guardrails to ensure the AI does not generate, promote, or facilitate nude, sexualised, sexually explicit, or otherwise unlawful content.
Legal context and enforcement levers
The notice flags non-compliance with the IT Rules, 2021 and the Bharatiya Nagarik Suraksha Sanhita, 2023, especially around obscene, vulgar, pornographic, paedophilic, and otherwise unlawful content.
If X fails to "strictly desist" from hosting, displaying, uploading, transmitting, storing, or sharing such content, it may lose safe-harbour protections.
- Reference: IT Rules, 2021 (MeitY)
- Reference: Bharatiya Nagarik Suraksha Sanhita, 2023 (PRS)
Pattern of misuse highlighted
Users have been tagging Grok in public threads to generate altered, sexualised images of women, often from publicly available photos, with the results posted in the same conversation. This exposes the women depicted to harassment without their knowledge or consent.
Concerns raised by public representatives have prompted the government to signal stronger regulation of AI-generated content on social platforms, with a potential law under consideration.
Operational guidance for government teams
- Direct platforms to preserve logs, prompts, outputs, and account metadata for active investigations.
- Set tight takedown SLAs for sexualised deepfakes and synthetic nudity, with fast escalation paths to the CCO.
- Coordinate with cybercrime cells for victim assistance, evidence handling, and repeat-offender tracking.
- Require proactive detection of face-swaps and clothing manipulation, including watermark checks and perceptual hashing (a minimal hashing sketch follows this list).
- Ask for transparent reporting: model versions, refusal rates for disallowed prompts, and enforcement numbers in India.
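To make the perceptual-hashing item above concrete, here is a minimal sketch of difference hashing (dHash) for matching re-uploads or lightly edited copies of known abusive images. It is an illustration only: the function names, the 64-bit hash size, the distance threshold, and the use of Pillow are assumptions, not requirements from the ministry's notice or any platform's actual pipeline.

```python
# Minimal difference-hash (dHash) sketch for flagging near-duplicate or lightly
# manipulated images against a registry of known abusive outputs.
# Hypothetical illustration only: production systems would use hardened libraries,
# larger hashes, and human review before any enforcement action.

from PIL import Image  # Pillow


def dhash(image_path: str, hash_size: int = 8) -> int:
    """Compute a difference hash: greyscale, downscale, compare adjacent pixels."""
    img = Image.open(image_path).convert("L").resize(
        (hash_size + 1, hash_size), Image.LANCZOS
    )
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def matches_known_content(candidate: str, known_hashes: list[int], threshold: int = 10) -> bool:
    """Flag the candidate if it is within `threshold` bits of any known abusive hash."""
    h = dhash(candidate)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```

Perceptual hashes tolerate resizing and recompression but not heavy manipulation, so they complement, rather than replace, classifier-based detection and human review.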
Expectations from X on safety guardrails
- Block image edits that target real people without provable consent; default-deny sexual content prompts.
- Strong filters for sexualisation of minors and public-figure impersonation; zero-tolerance policy for offenders.
- Friction in threads: disable generative image replies where misuse is detected; rate-limit mass tagging of the AI (see the rate-limiting sketch after this list).
- Visible watermarking of AI outputs and tamper-resistant signatures for platform-level detection (see the signing sketch after this list).
- Dedicated India moderation queue with 24x7 coverage and auditable actions.
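The rate-limiting item above can be sketched with a simple token bucket applied per account that tags the image-generation bot. This is a hypothetical illustration: the bucket size, refill rate, and per-account keying are assumptions, not X's actual enforcement logic.

```python
# Minimal token-bucket sketch for rate-limiting how often a single account can
# tag the image-generation bot in public threads. Hypothetical illustration:
# bucket sizes, refill rates, and per-account keying are assumptions.

import time
from dataclasses import dataclass, field


@dataclass
class TokenBucket:
    capacity: float = 5.0          # maximum burst of tag requests
    refill_per_sec: float = 0.01   # roughly one new request allowed every 100 seconds
    tokens: float = 5.0
    last_refill: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        """Return True if this tag request should be processed, False to throttle it."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_per_sec)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


# One bucket per account; throttled requests can be routed to a review queue
# instead of being served, which preserves evidence while cutting off mass abuse.
buckets: dict[str, TokenBucket] = {}


def allow_tag_request(account_id: str) -> bool:
    bucket = buckets.setdefault(account_id, TokenBucket())
    return bucket.allow()
```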
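The watermarking and signature item can likewise be sketched with a symmetric HMAC over a generated file's bytes, so the platform can later check whether a posted image came from its own generator. This is an assumption-laden illustration: the key handling, function names, and the choice of HMAC rather than asymmetric or C2PA-style provenance metadata are not from the notice.

```python
# Minimal sketch of tamper-evident signing for AI-generated image files.
# Hypothetical illustration: real deployments would favour asymmetric signatures
# or embedded provenance manifests over a shared HMAC key.

import hashlib
import hmac

SIGNING_KEY = b"replace-with-a-managed-secret"  # assumption: kept in an HSM or secret store


def sign_image_bytes(image_bytes: bytes) -> str:
    """Return an HMAC-SHA256 tag to store alongside the generated image."""
    return hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).hexdigest()


def is_platform_generated(image_bytes: bytes, claimed_tag: str) -> bool:
    """Verify a posted file against its claimed provenance tag in constant time."""
    expected = sign_image_bytes(image_bytes)
    return hmac.compare_digest(expected, claimed_tag)
```

A byte-level signature breaks as soon as the file is re-encoded or cropped, which is why it pairs with visible watermarks and perceptual hashing rather than standing alone.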
Broader policy direction
The IT Ministry recently advised platforms to avoid hosting obscene or vulgar content and to tighten their compliance frameworks. A parliamentary committee has recommended a strong law for social media regulation, and the ministry has indicated that such a law is under consideration.
Why this matters for public sector teams
Unchecked AI misuse normalises harassment, chills women's participation online, and increases enforcement load downstream. Clear guardrails, faster takedowns, and measurable compliance are now baseline expectations for intermediaries operating in India.
Capacity building
If your department is setting up internal training on AI safety, moderation, and prompt controls, you can review curated public options here: AI courses by job role.