India moves to label deepfakes: draft IT rules require prominent markers and stricter checks by big social platforms

India's draft IT Rules target deepfakes with clear labels, baked-in metadata, and platform checks. The aim: make synthetic content obvious, traceable, and faster to remove.

Categorized in: AI News, Government
Published on: Oct 23, 2025

India moves to label deepfakes: draft IT Rules push clear markers, metadata, and platform accountability

The Ministry of Electronics and Information Technology (MeitY) has proposed draft amendments to the IT Rules, 2021 to reduce user harm from AI-generated deepfakes and synthetic media. The plan: require clear labels, embed permanent identifiers, and hold major platforms accountable for verification and visibility. For public officials, this sets a more structured playbook for tackling misinformation, fraud, and impersonation at scale.

Why this matters

Generative tools have made synthetic audio, video, and images easy to produce and hard to spot. That opens the door to election interference, reputational damage, non-consensual imagery, and financial scams. The draft rules aim to make synthetic content obvious to users, traceable for investigations, and easier to act on when flagged.

Key provisions in the draft

  • Definition: "Synthetically generated information" is content created, modified, or altered using a computer resource in a way that appears reasonably authentic or true.
  • Who's covered: Intermediaries, with stronger duties for social media intermediaries with 50 lakh (5 million) or more users, designated Significant Social Media Intermediaries (SSMIs).
  • User declaration: SSMIs must obtain a declaration from uploaders on whether content is synthetic, and apply reasonable, proportionate technical measures to verify it.
  • Clear labelling: Synthetic content must carry a visible or audible notice that enables immediate identification. Platforms must ensure it is clearly labelled wherever it appears.
  • Permanent identifiers: Intermediaries offering tools that create or modify synthetic content must embed a permanent unique metadata tag or identifier in the content.
  • Visibility and audibility standards: The label must cover at least 10% of the visual surface area, or be audible during the initial 10% of an audio file's duration.
  • No tampering: Intermediaries are prohibited from suppressing, altering, or removing labels or identifiers.
  • Due diligence and protection: Intermediaries may retain statutory (safe-harbour) protection when they remove or disable access to synthetic content based on reasonable efforts or user grievances.
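As a rough illustration of the 10% visibility and audibility thresholds above, the checks could be expressed as simple area and duration comparisons. This is a sketch only: the function names are ours, and the draft rules do not prescribe any particular compliance check.

```python
def label_meets_visual_threshold(label_w: int, label_h: int,
                                 frame_w: int, frame_h: int,
                                 min_fraction: float = 0.10) -> bool:
    """Check whether a visual label covers at least 10% of the frame area,
    per the draft rule's minimum visibility standard."""
    return (label_w * label_h) >= min_fraction * (frame_w * frame_h)

def audible_notice_in_initial_window(notice_start_s: float,
                                     total_duration_s: float,
                                     min_fraction: float = 0.10) -> bool:
    """Check whether an audible notice begins within the initial 10% of an
    audio file's duration (a simplified reading of the draft standard)."""
    return notice_start_s < min_fraction * total_duration_s

# Example: a 480x1080 banner on a 1920x1080 frame clears the 10% bar;
# a notice starting at 0s in a 60s clip falls inside the first 6 seconds.
print(label_meets_visual_threshold(480, 1080, 1920, 1080))
print(audible_notice_in_initial_window(0.0, 60.0))
```

How the 10% window is measured in practice (area vs. each dimension, start vs. full overlap) will depend on the final rules and any MeitY guidance.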

What government departments should do now

  • Update communication SOPs: Require official pages to flag any synthetic media used for demonstrations or education. Document how teams verify third-party content before sharing.
  • Strengthen grievance workflows: Ensure channels to report deepfakes are easy to use. Set SLAs for triage, takedown requests to platforms, and evidence preservation.
  • Coordinate with platforms: Establish points of contact with SSMIs to accelerate response on high-risk cases, especially during elections or public emergencies.
  • Procurement checks: For agencies using AI tools, require vendors to support metadata embedding and visible labelling by default.
  • Train frontline teams: Brief PROs, cyber cells, and election teams on reading labels/metadata and escalating suspected deepfakes.
  • Public awareness: Run short advisories on recognising synthetic labels and reporting procedures. Prioritise accessibility across languages and formats.
  • Legal readiness: Align with MeitY guidance on due diligence, retention, and evidence standards to support investigations.

Practical implications for enforcement

Labels and embedded identifiers will make it easier to distinguish genuine content from fabricated clips and speed up platform actions. Expect more consistent takedown pathways and clearer thresholds for intervention. Agencies should prepare to use metadata in case handling and public advisories.
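To make the metadata point concrete, a case-handling workflow could triage incoming items by whether embedded metadata declares the content as synthetic and carries a permanent identifier. The key names and the sidecar-JSON format below are hypothetical; the draft rules do not specify a metadata schema.

```python
import json

# Hypothetical keys a compliant generation tool might embed; the draft rules
# require a permanent identifier but do not define these exact field names.
SYNTHETIC_FLAG_KEY = "synthetic_content"
IDENTIFIER_KEY = "provenance_id"

def triage(metadata_json: str) -> str:
    """Classify an item for case handling based on its embedded metadata."""
    meta = json.loads(metadata_json)
    if meta.get(SYNTHETIC_FLAG_KEY) and meta.get(IDENTIFIER_KEY):
        return "labelled-synthetic"      # declared and traceable
    if meta.get(SYNTHETIC_FLAG_KEY):
        return "labelled-no-identifier"  # declared, but missing the permanent tag
    return "undeclared"                  # no declaration; escalate for manual review

print(triage('{"synthetic_content": true, "provenance_id": "abc123"}'))
```

In a real deployment, the metadata would be extracted from the media file itself (for instance, via a provenance standard such as C2PA) rather than passed in as JSON.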

Timeline and participation

MeitY has invited feedback on the draft amendments until November 6, 2025. Departments should consolidate inputs across legal, IT, communications, and field units and submit a unified response.

Bottom line

The draft rules set a clear signal: synthetic media should be declared, labelled, and traceable. For government teams, this is a prompt to tighten verification, speed up response protocols, and prepare the public to spot deepfakes before they spread.


