India puts social platforms on the clock: prominent AI labels and two-hour deepfake removals

India tightens the IT Rules: platforms must pull illegal posts within 2-3 hours, with deepfakes and non-consensual nudity gone within 2. Labels are now required for AI content that looks real.

Categorized in: AI News Government
Published on: Feb 11, 2026

New IT Rules: Faster takedowns, clear labels for AI content

The Union Government has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 that tighten response times and mandate clear labels on photorealistic AI-generated content. The changes take effect on February 20 and focus on two things: speed of removal and transparency of synthetic media.

For government teams, this sets a higher bar for coordination with platforms and quicker execution of takedown orders, especially for deepfakes and non-consensual imagery.

What's new and time-bound

  • 2-3 hour window: Social media platforms must remove specified unlawful content within 2-3 hours (down from 24-36 hours).
  • 3 hours: Content deemed illegal by a court or an "appropriate government" must be removed within 3 hours.
  • 2 hours: Sensitive content (non-consensual nudity and deepfakes) must be removed within 2 hours.

These timelines require 24x7 readiness from both platforms and State/Central teams that issue or validate takedown orders.
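To make those clocks concrete, here is a minimal Python sketch of the deadline arithmetic a duty team might run on receipt of an order. The category keys and function names are illustrative, not terms from the rules.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical category keys mapped to the windows described above:
# 3 hours for court/"appropriate government" orders, 2 hours for
# non-consensual nudity and deepfakes.
REMOVAL_WINDOWS = {
    "court_or_government_order": timedelta(hours=3),
    "deepfake_or_non_consensual": timedelta(hours=2),
}

IST = timezone(timedelta(hours=5, minutes=30))

def removal_deadline(category: str, order_received: datetime) -> datetime:
    """Return the latest time by which the content must be down."""
    return order_received + REMOVAL_WINDOWS[category]

# Example: a deepfake order logged at 22:10 IST must be actioned
# by 00:10 the next morning.
received = datetime(2026, 2, 21, 22, 10, tzinfo=IST)
print(removal_deadline("deepfake_or_non_consensual", received).isoformat())
```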

"Synthetic" content: the official definition

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 define synthetically generated content as audio, visual, or audio-visual information that is artificially or algorithmically created, generated, modified, or altered in a way that appears real and depicts an individual or event as indistinguishable from a natural person or real-world event.

A carve-out exists for routine smartphone camera touch-ups. The final definition is narrower than the October 2025 draft, limiting scope to content that looks real and could be confused for authentic people or events.

Labeling duties and user disclosures

Platforms must seek disclosures from users when content is AI-generated. If a disclosure isn't provided, platforms must either prominently label the content or, in cases of non-consensual deepfakes, take it down.

Labels must be "prominent." The earlier draft's 10% image coverage requirement has been relaxed, giving platforms operational leeway while keeping the visibility requirement intact.
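A minimal Python sketch of that decision flow, assuming hypothetical flags (`looks_real`, `has_disclosure`, `is_non_consensual_deepfake`) that a moderation pipeline would have to populate upstream:

```python
from enum import Enum

class Action(Enum):
    PASS_THROUGH = "no synthetic-media action needed"
    LABEL = "apply a prominent 'AI-generated' label"
    REMOVE = "take down within 2 hours"

def synthetic_media_action(looks_real: bool,
                           has_disclosure: bool,
                           is_non_consensual_deepfake: bool) -> Action:
    """Label-or-remove logic as described above: non-consensual
    deepfakes come down; other undeclared photorealistic synthetic
    media gets a prominent label."""
    if is_non_consensual_deepfake:
        return Action.REMOVE
    if looks_real and not has_disclosure:
        return Action.LABEL
    return Action.PASS_THROUGH
```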

Safe harbour: where platforms can lose protection

As with existing IT Rules, failure to comply can lead to loss of safe harbour. If an intermediary knowingly permits, promotes, or fails to act on synthetically generated content in violation of these rules, it may be deemed to have failed due diligence, risking legal exposure.

Administrative update for States

The rules partially roll back the October 2025 change that limited each State to a single authorised takedown officer. States may now designate more than one such officer, an administrative fix for larger populations and higher case volumes.

What this means for government departments

  • Set up rapid-order workflows: Ensure court orders and "appropriate government" directions can be verified and dispatched to platforms within minutes, not hours.
  • Staff for 24x7 coverage: Create duty rosters for legal, IT, and law enforcement points of contact to meet the 2-3 hour windows.
  • Standardise templates: Use clear order formats that include the legal basis, URLs, timestamps, and contact details for platform escalation.
  • Maintain a verified contact registry: Keep current platform escalation channels (emails, portals, phone lines) and test them regularly.
  • Document evidence quickly: For deepfakes and non-consensual content, capture hashes, source URLs, and context before removal (a minimal sketch follows this list).
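Here is a minimal evidence-capture sketch in Python; the record fields and log filename are illustrative, not a prescribed format:

```python
import hashlib
import json
from datetime import datetime, timezone

def capture_evidence(file_path: str, source_url: str, case_id: str) -> dict:
    """Hash the offending file and record its context before takedown."""
    with open(file_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    record = {
        "case_id": case_id,
        "source_url": source_url,
        "sha256": digest,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    # Append-only log, one JSON object per line, so entries are
    # cheap to write under time pressure and easy to audit later.
    with open("evidence_log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return record
```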

Guidance for platforms (useful for government validators)

  • Collect user disclosures for AI-generated posts and tag them at upload.
  • Apply prominent labels to photorealistic synthetic media where disclosures are missing.
  • Remove non-consensual deepfakes and nudity within 2 hours of awareness.
  • Maintain audit logs to demonstrate timely action and preserve safe harbour protections (see the sketch after this list).
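One way to structure such a log entry, sketched in Python with illustrative field names; the point is to capture receipt time, action time, and whether the applicable window was met:

```python
from datetime import datetime, timedelta

def audit_entry(order_id: str,
                received_at: datetime,
                actioned_at: datetime,
                window: timedelta) -> dict:
    """Build an audit record showing the action met its window."""
    elapsed = actioned_at - received_at
    return {
        "order_id": order_id,
        "received_at": received_at.isoformat(),
        "actioned_at": actioned_at.isoformat(),
        "elapsed_minutes": round(elapsed.total_seconds() / 60),
        "within_window": elapsed <= window,
    }
```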

Operational checklist for the 2-3 hour window

  • Single intake channel for government and court orders with automatic ticketing and timestamps.
  • Pre-approved response SOPs for categories: court/appropriate-government illegal content (3 hours) and sensitive deepfake/non-consensual content (2 hours).
  • Escalation tiers: frontline moderator → legal review → executive sign-off within strict time bounds (an illustrative time-budget sketch follows this list).
  • Post-action reporting back to the issuing authority with action time, scope, and residual risks.
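The rules fix only the overall deadline, not how a platform divides it internally. As an assumption for illustration, the Python sketch below splits a 2-hour window into per-tier checkpoints so each hand-off has its own alarm:

```python
from datetime import datetime, timedelta

# Hypothetical internal budgets that sum to under 2 hours;
# the rules do not prescribe this split.
TIER_BUDGETS = [
    ("frontline_moderation", timedelta(minutes=30)),
    ("legal_review", timedelta(minutes=45)),
    ("executive_signoff", timedelta(minutes=15)),
]

def tier_deadlines(received_at: datetime) -> list[tuple[str, datetime]]:
    """Derive per-tier checkpoints from the order receipt time."""
    checkpoints, cursor = [], received_at
    for tier, budget in TIER_BUDGETS:
        cursor = cursor + budget
        checkpoints.append((tier, cursor))
    return checkpoints
```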

Labeling that meets the "prominent" bar

  • Visible, unambiguous markers on images and videos indicating "AI-generated" or "synthetic" (a minimal image-stamping sketch follows this list).
  • Labels present wherever the content appears (feed, profile, shares, previews).
  • No reliance on small tooltips or hard-to-find disclosures.
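What counts as "prominent" will be judged case by case, but burning the label into the pixels helps it survive shares and previews. Here is a minimal sketch using the Pillow imaging library; the banner sizing and placement are illustrative choices, not regulatory thresholds:

```python
from PIL import Image, ImageDraw  # pip install Pillow

def stamp_ai_label(in_path: str, out_path: str,
                   text: str = "AI-generated") -> None:
    """Burn a visible banner into the image so the label travels
    with every copy, share, and preview."""
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    banner_height = max(24, img.height // 12)  # illustrative sizing
    # Solid strip along the bottom edge, then the label text on it.
    draw.rectangle(
        [(0, img.height - banner_height), (img.width, img.height)],
        fill=(0, 0, 0),
    )
    draw.text((10, img.height - banner_height + 4), text,
              fill=(255, 255, 255))
    img.save(out_path)
```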

Immediate actions before February 20

  • Notify and train designated officers; add backups for nights and weekends.
  • Update SOPs to reflect the 2-3 hour takedown windows and evidence preservation steps.
  • Coordinate with major platforms to confirm contacts and expected response format.
  • Prepare public communication guidance for incidents involving deepfakes to reduce harm.

For reference materials and updates on IT Rules, see the Ministry of Electronics and Information Technology (MeitY) website: meity.gov.in.

If your team needs a fast primer on AI content, labeling practices, and platform workflows, explore role-based options here: Complete AI Training - Courses by Job.

