India weighs legal action against Musk's Grok after explicit AI image scandal

India's IT ministry may take legal action against Grok on X over AI-generated explicit images of women and children. Paid-only access and warnings won't suffice unless they become formal, enforced policy.

Published on: Jan 11, 2026

Government may pursue legal action against Grok over objectionable AI-generated images

India's electronics and IT ministry (MeitY) is weighing legal action against Grok, the Elon Musk-owned AI chatbot run on X, for allegedly enabling the creation and spread of sexually explicit and objectionable images of women and children. The ministry is reviewing X's action-taken report even as potential proceedings against the platform and its AI tool remain on the table.

xAI recently limited Grok's image generation on X to paying and premium users after public blowback. Officials have indicated this step is insufficient unless it is declared as formal policy for India and enforced consistently.

What MeitY has asked X to do

  • Remove or disable access, without delay, to all content generated or distributed in violation of applicable Indian laws, in line with the IT Rules, 2021.
  • Preserve all relevant evidence and prevent tampering.
  • Strictly enforce terms of service and AI usage restrictions.
  • Suspend or terminate accounts that used Grok to generate sexually explicit images involving women or minors.

X has responded to MeitY's notice seeking clarity on how Grok allowed such content via simple prompts. Platform-level warnings, including Elon Musk's statement that illegal use of Grok carries the same consequences as uploading illegal content, do not replace statutory compliance.

Paid-only access isn't a shield

Limiting image generation to paid users on X may reduce abuse, but it is not a compliance solution by itself. Reports indicate the standalone Grok app still allows image generation without a subscription, which weakens any argument that access controls meaningfully curb risk.

For Indian regulators, the test is whether unlawful content is prevented, removed promptly, and documented across all distribution surfaces tied to the product.

Potential exposure under Indian law

  • Information Technology Act, 2000: Section 67 (obscene material) and 67B (material depicting children in sexually explicit acts) may be implicated.
  • The POCSO Act and IPC Sections 292/293 can apply to the creation, transmission, and circulation of sexually explicit content, including material involving minors.
  • IT Rules, 2021 impose due-diligence duties on intermediaries, including swift takedown, user notice, and cooperation with lawful orders.
  • Safe-harbor under Section 79 hinges on responding to "actual knowledge," grievance handling, and demonstrable enforcement. Gaps in product policy, moderation, or evidence preservation can put immunity at risk.

International pressure is mounting

The UK's Internet Watch Foundation reported allegedly criminal Grok-generated images circulating on the dark web. UK Prime Minister Keir Starmer called the images disgraceful and backed enforcement by Ofcom, which is implementing the UK's Online Safety Act regime.

In the EU, regulators have termed the images illegal and directed X to preserve internal documents related to Grok as part of an ongoing review under the Digital Services Act (DSA). Coordinated oversight increases discovery risk and narrows room for inconsistent platform behavior by region.

Immediate actions legal teams should drive

  • Codify India-specific AI policy: Make the paid-only change (if retained) a formal, public policy for India with clear enforcement playbooks.
  • Close the "standalone app" gap: Ensure parity in access controls, filters, and takedown workflows across X and any separate Grok interfaces.
  • Strengthen guardrails: Deploy high-recall classifiers for sexual content and CSAM, prompt-level blocks, age-related controls, and hard denylists in Indian locales.
  • Evidence handling: Log prompts, outputs, model/version IDs, and enforcement actions. Preserve artifacts to comply with orders and to defend safe-harbor.
  • User enforcement: Automate account suspension/termination for prohibited use; maintain audit trails to prove consistent application.
  • Rapid takedown: Commit to strict SLAs for removal and document every step: intake, evaluation, action, and confirmation.
  • Red-teaming and audits: Run ongoing abuse tests specific to Indian law; validate precision/recall tradeoffs and escalate known failure modes.
  • Governance: Designate accountable officers, refresh ToS and community guidelines for India, and align incident response with legal thresholds for reporting.
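The guardrail and evidence-handling items above can be illustrated with a minimal sketch. This is a hypothetical example, not xAI's or any platform's actual implementation: the denylist terms, function names, and record schema are all assumptions. Real deployments would pair keyword blocks with ML classifiers and write records to tamper-evident storage.

```python
import hashlib
import json
import time

# Hypothetical hard denylist; production systems would combine this
# with high-recall ML classifiers rather than rely on keywords alone.
PROMPT_DENYLIST = {"undress", "nude", "explicit"}

def is_blocked(prompt: str) -> bool:
    """Prompt-level hard block: deny if any denylisted term appears."""
    lowered = prompt.lower()
    return any(term in lowered for term in PROMPT_DENYLIST)

def log_event(prompt: str, output_id: str, model_version: str, action: str) -> str:
    """Build an append-only evidence record: prompt hash, model/version
    ID, and the enforcement action taken, per the checklist above."""
    record = {
        "ts": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_id": output_id,
        "model_version": model_version,
        "action": action,
    }
    # In production this line would append to preserved, auditable storage.
    return json.dumps(record)

prompt = "generate an explicit image"
action = "blocked" if is_blocked(prompt) else "allowed"
evidence = log_event(prompt, "out-001", "img-model-v1", action)
print(action)  # blocked
```

Hashing the prompt rather than storing it verbatim is one design choice for audit trails; actual preservation orders may require retaining the raw prompt and output as well.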

What to watch next

  • Whether MeitY accepts X's action-taken report and India-specific policy changes, or proceeds with litigation.
  • Any directives covering the standalone Grok app and third-party distribution channels.
  • Orders on document preservation, cross-border data access, and potential penalties tied to repeat violations.
  • Convergence with UK/EU enforcement that could set stricter global benchmarks for AI image tools.

Bottom line

India is signaling that AI features that enable sexual exploitation will face strict scrutiny: product changes must be formalized, enforced, and provable. For counsel, the priority is closing policy and product gaps now, before MeitY or the courts do it for you.

If your team is building internal AI policies and needs structured upskilling, see curated options by role here: Complete AI Training - Courses by Job.

