India mandates AI labels and traceable metadata on deepfakes; Facebook and Instagram get 3-hour takedown clock

India's updated IT rules now require labels and traceable metadata on AI-made content, plus checks before upload. Some takedowns must happen within three hours.

Categorized in: AI News, Government
Published on: Feb 11, 2026

India's new IT rules make AI content labelling mandatory; 3-hour takedowns for key violations

Starting February 20, India's updated IT intermediary rules formally bring AI-generated content under compliance. Platforms must clearly label synthetic content, embed traceable metadata and unique identifiers, and ensure those labels can't be removed or altered.

The amendment, issued via gazette notification G.S.R. 120(E), also compresses takedown timelines. For certain lawful orders, platforms will have as little as three hours to act.

What counts as AI-generated content (SGI)

The rules define "synthetically generated information" (SGI) as any audio, visual, or audio-visual content created or altered using a computer resource that could pass as real. That includes deepfakes, synthetic voice, and altered visuals designed to look authentic.

Routine edits, such as colour correction, noise reduction, compression, or translation, are exempt if they don't change the original meaning. Research papers, training materials, PDFs, presentations, and hypothetical drafts using illustrative content are also out of scope.

What platforms must do

Before upload, platforms must ask users to declare if a post is AI-generated. They also need automated tools to verify the claim by checking the format, source, and nature of the content before it goes live.

If flagged as synthetic, the post must carry a visible disclosure and persistent metadata so the origin can be traced. If a platform knowingly lets violating content through, it risks being treated as having failed due diligence.
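The rules don't prescribe a metadata format, but the requirement for a visible disclosure plus persistent, traceable metadata can be sketched as a provenance record tied to the content bytes. The schema below (field names, the `example-gen-v1` tool name) is purely illustrative, not anything mandated by the notification:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_sgi_record(content: bytes, tool: str, declared_by_user: bool) -> dict:
    """Build a hypothetical provenance record for synthetically generated
    information (SGI): an SGI flag, the user's pre-upload declaration, a
    content hash, the generating tool, and a UTC timestamp."""
    return {
        "sgi": True,                                    # drives the visible label
        "user_declared": declared_by_user,              # pre-upload declaration
        "sha256": hashlib.sha256(content).hexdigest(),  # ties record to the bytes
        "tool": tool,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

record = make_sgi_record(b"synthetic video bytes", tool="example-gen-v1",
                         declared_by_user=True)
print(json.dumps(record, indent=2))
```

Hashing the content means the record can be checked later even if the visible label is stripped from a re-shared copy, which is the point of "persistent" metadata.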

An earlier draft proposal to force a watermark covering 10% of the screen was dropped. Labels and metadata are still mandatory; fixed-size watermarks are not.

Enforcement windows are tighter

Response times have been cut sharply. Some lawful orders now require action within three hours (down from 36). The 24-hour window is reduced to 12 hours, and the 15-day window is now seven days.

SGI tied to child sexual abuse material, obscenity, false electronic records, explosives-related content, or deepfakes that impersonate a real person's identity or voice can trigger action under the Bharatiya Nyaya Sanhita, the POCSO Act, and the Explosive Substances Act.

Platforms must warn users at least once every three months, in English or any Eighth Schedule language, about penalties for misusing AI content. Acting against synthetic content as per these rules will not strip intermediaries of safe harbour under Section 79 of the IT Act.

What government teams should do this week

  • Update SOPs for issuing lawful orders to reflect the 3-hour, 12-hour, and 7-day windows. Build in standby coverage for nights and holidays.
  • Standardize order templates that cite the amended rules and include content hashes, URLs, and preservation instructions.
  • Designate a single nodal contact per department with a 24x7 escalation line for major platforms.
  • Set up evidence preservation workflows: collect originals, capture metadata, and document chain of custody.
  • For official accounts: label any AI-generated posts and ensure persistent metadata is embedded before publishing.
  • Coordinate with platform compliance teams on pre-verified channels for urgent takedowns and false-impersonation deepfakes.
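The evidence-preservation step in the checklist above (collect originals, capture metadata, document chain of custody) can be sketched as an append-only log of hashed entries. The field names and example URL are hypothetical, not part of the rules:

```python
import hashlib
import json
from datetime import datetime, timezone

def preserve_evidence(content: bytes, source_url: str, handler: str) -> dict:
    """Record a chain-of-custody entry for takedown evidence: the SHA-256
    hash fixes the exact bytes collected, the URL locates the post, and the
    handler and timestamp support later verification in court."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "url": source_url,
        "collected_by": handler,
        "collected_utc": datetime.now(timezone.utc).isoformat(),
    }

entry = preserve_evidence(b"original post bytes",
                          source_url="https://example.com/post/123",
                          handler="nodal-contact-01")
# One JSON object per line makes the log easy to append to and audit.
log_line = json.dumps(entry, sort_keys=True)
print(log_line)
```

Writing the hash at collection time, before any takedown order is sent, is what lets teams later prove the content they acted on is the content that was posted.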

Guidance for public information and comms units

Publishers of official content should assume that unlabelled synthetic media will be flagged or throttled. Adopt a simple rule: if it looks real but was generated or altered by AI, label it and embed persistent metadata.

Maintain an internal register of AI-generated posts, assets, and source files. Keep a quick-response note ready that explains the label to citizens when posts are questioned.

Procurement and vendor alignment

Update creative and media contracts to require SGI labelling, persistent metadata, and disclosure placement that is instantly visible. Add audit rights to sample-check deliverables before and after publishing.

Require vendors to document the tools and prompts used to generate content and to retain originals for a defined period.

Law enforcement and cyber cells

Prioritise deepfakes that impersonate identity or voice, CSAM, obscene content, explosives material, and false electronic records. Map quick triage pathways to the right statutes and courts.

Build a roster of platform contacts to fast-track three-hour actions. Align digital forensics teams to extract and verify embedded metadata without corrupting evidence.
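Verifying embedded metadata without corrupting evidence comes down to read-only checks: recompute the content hash and compare it to the hash in the accompanying provenance record, never writing back to either. A minimal sketch, assuming a hypothetical record schema with a `sha256` field:

```python
import hashlib

def verify_provenance(content: bytes, record: dict) -> bool:
    """Return True if the evidence bytes still match the hash recorded in
    their provenance metadata. Purely read-only: neither the content nor
    the record is modified, so the evidence chain stays intact."""
    return hashlib.sha256(content).hexdigest() == record.get("sha256")

original = b"seized media bytes"
record = {"sha256": hashlib.sha256(original).hexdigest()}

print(verify_provenance(original, record))         # untouched copy: True
print(verify_provenance(original + b"x", record))  # tampered copy: False
```

A failed check doesn't prove the content is synthetic, only that it no longer matches its declared provenance, which is itself useful triage signal.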

What's exempt, and what isn't

Normal clean-up edits are fine if they don't alter the original meaning. Internal training decks and illustrative drafts are excluded.

If content is presented as real but is AI-created or materially altered, it needs a label and traceable metadata. That line is the test.

Compliance risks and safe harbour

For platforms, ignoring clear violations or skipping verification can be treated as failure of due diligence. For public bodies, publishing unlabelled synthetic media can erode credibility and trigger takedown friction.

Safe harbour under Section 79 remains in place when intermediaries act under these rules. The expectation now is fast, verifiable action, not best-effort delays.

Useful reference

Official IT Intermediary Guidelines and Digital Media Ethics Code Rules

