Government gives X until January 7, 2026 to file Action Taken Report on AI-generated obscene content
The government has granted X additional time, until January 7, 2026, to submit a detailed Action Taken Report (ATR), after issuing a stern warning over indecent and sexually explicit content generated through misuse of AI tools such as Grok. The extension follows a request from the Elon Musk-owned platform, which had been told to act within 72 hours of a January 2 directive.
On January 4, X's Safety team stated it would remove illegal content, permanently suspend offending accounts, and work with local authorities. It also clarified that users prompting Grok to create illegal content would face the same penalties as those uploading it directly.
What changed
The IT Ministry initially set a 72-hour window, which effectively made January 5 the deadline for X to submit the Action Taken Report. Following X's request for more time, the deadline has been extended to January 7.
Core government directives issued on January 2
- Immediately remove all vulgar, obscene and unlawful content, including content generated or assisted by Grok.
- Enforce user terms and AI usage restrictions; suspend or terminate violating accounts.
- Disable access to offending content without delay and within IT Rules, 2021 timelines, while preserving evidence.
- File a detailed Action Taken Report covering technical and organisational safeguards, Chief Compliance Officer oversight, enforcement taken, and compliance with mandatory reporting under Indian law.
Why this matters for safe harbour
The Ministry reiterated that compliance with the IT Act and Rules is mandatory. Safe harbour protections under Section 79 are conditional on strict due diligence.
Non-compliance risks loss of safe harbour and exposure to action under applicable laws, including the IT Act and the Bharatiya Nyaya Sanhita. The bar is clear: prevent, detect, remove, and report.
What the Action Taken Report must demonstrate
- Effective safeguards in Grok and related features to prevent prompts and outputs that produce illegal content (a minimal illustration follows this list).
- Active oversight by the Chief Compliance Officer and auditable processes for detection and takedown.
- Swift enforcement against offending users/accounts, including permanent suspensions.
- Mechanisms for timely reporting to law enforcement and cooperation without vitiating evidence.
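As a rough illustration of the first point, a prompt-level safeguard might gate every generation request before any output is produced. The sketch below is an assumption for explanatory purposes, not X's or xAI's actual implementation; the keyword lists stand in for what would in practice be trained safety classifiers.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    ESCALATE = "escalate"   # hold for human review and mandatory reporting

@dataclass
class Decision:
    verdict: Verdict
    reason: str

# Placeholder signal lists; a real system would use trained safety
# classifiers, not keyword matching.
REPORTABLE = {"minor", "child"}
DISALLOWED = {"undress", "nude", "explicit"}

def gate_generation(prompt: str) -> Decision:
    """Gate a generation request before any output is produced.

    Every blocked or escalated request would be logged, feeding the
    audit trail an Action Taken Report needs to evidence.
    """
    text = prompt.lower()
    if any(t in text for t in REPORTABLE) and any(t in text for t in DISALLOWED):
        return Decision(Verdict.ESCALATE, "suspected child sexual abuse material")
    if any(t in text for t in DISALLOWED):
        return Decision(Verdict.BLOCK, "sexually explicit request")
    return Decision(Verdict.ALLOW, "no policy signal")
```

Gating before generation, rather than scanning outputs after the fact, is one way a platform could evidence the "prevent" prong the Ministry's notice emphasises.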
Timelines and due diligence
The platform must remove or disable access to unlawful content promptly, in line with IT Rules, 2021 timelines: broadly, within 36 hours of a court order or government notification, and within 24 hours of a complaint about content depicting nudity, sexual acts, or morphed imagery of a person. Demonstrable, auditable compliance is expected on an ongoing basis, not just in response to incidents.
For reference, see the IT Rules, 2021, as notified by MeitY: Intermediary Guidelines and Digital Media Ethics Code Rules, 2021.
International scrutiny
Beyond India, X faces regulatory scrutiny in the U.K. and Malaysia. Ofcom has flagged concerns about Grok being used to produce images of real people digitally undressed, as well as sexualised depictions of children, and has contacted X and xAI for details of the steps taken to protect users.
Ofcom's online safety oversight framework is here: Ofcom - Online Safety.
Immediate priorities for government stakeholders
- Monitor X's compliance against the January 7 deadline; require a point-by-point ATR mapped to the January 2 notice.
- Ensure evidence preservation while enforcing takedowns; coordinate with law enforcement for CSAM and other illegal material.
- Verify that X has intensified proactive detection for prompts, image manipulation, and synthetic outputs targeting women and children.
- Assess the role and accountability of the Chief Compliance Officer, including escalation paths and audit trails.
- Prepare for follow-up action if due diligence gaps persist, including initiating proceedings tied to loss of safe harbour.
What to watch by January 7
- Clear, technical safeguards in Grok that block generation and distribution of illegal content.
- Quantifiable enforcement metrics: content removed, accounts terminated, and response times (a possible reporting structure is sketched after this list).
- Documented cooperation with authorities and adherence to mandatory reporting requirements.
- Auditable processes showing continuous, not one-time, compliance.
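To make the metrics point concrete, enforcement figures could be reported in a simple structured form. The field names and schema below are illustrative assumptions only; neither the January 2 notice nor the IT Rules prescribe a format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class EnforcementMetrics:
    # Illustrative fields; values here are placeholders, not real data.
    period_start: str             # ISO date, e.g. "2026-01-02"
    period_end: str               # ISO date, e.g. "2026-01-07"
    items_removed: int            # unlawful posts/media taken down
    accounts_suspended: int       # permanent suspensions
    reports_to_law_enforcement: int
    median_takedown_hours: float  # time from notice/detection to removal

metrics = EnforcementMetrics(
    period_start="2026-01-02",
    period_end="2026-01-07",
    items_removed=0,
    accounts_suspended=0,
    reports_to_law_enforcement=0,
    median_takedown_hours=0.0,
)
print(json.dumps(asdict(metrics), indent=2))
```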
The message is unambiguous: platform-level guardrails must work, and enforcement must be fast. If X cannot meet due diligence obligations, statutory protections will not apply.
Strengthening internal capability
Agencies building capacity in AI risk, content moderation, and audit practices may benefit from structured upskilling; curated resources can help teams get aligned: AI courses by job role.
Next milestone: January 7, 2026. Expect a detailed ATR with measurable safeguards, tougher enforcement, and evidence of ongoing compliance.