Government proposes stricter AI and deepfake rules: mandatory labels and platform accountability
The government has proposed amendments to the IT Rules, 2021 to curb harm from deepfakes and synthetic media. The draft lays out a clear legal basis for labelling, traceability, and accountability, and places stronger due diligence obligations on large platforms.
Why now? Deepfake audio, video, and images are spreading fast and often look real enough to mislead, damage reputations, influence elections, or enable fraud. The goal is simple: help users know what is synthetic and what is authentic.
What the draft rules require
- A clear definition: synthetically generated content is information artificially or algorithmically created, modified, or altered by a computer resource in a way that appears reasonably authentic.
- Mandatory labelling and visibility: platforms must add prominent markers and identifiers to synthetic or modified content, covering at least 10% of the visual display area or the first 10% of an audio clip's duration. Metadata embedding is required to support traceability (a sketch of the coverage arithmetic follows this list).
- User declarations: platforms must obtain a declaration from uploaders on whether content is synthetic, verify that claim with reasonable technical measures, and label or display a notice accordingly.
- No tampering: intermediaries cannot modify, suppress, or remove labels or identifiers once applied.
- Who is covered: significant social media intermediaries (SSMIs) with 50 lakh (5 million) or more registered users, and platforms that enable creation or modification of synthetic content.
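The 10% thresholds above reduce to simple arithmetic. Below is a minimal sketch, assuming a rectangular on-screen label and a spoken disclosure that starts at 0 seconds; the VisualLabel structure and function names are illustrative, not taken from the draft rules.

```python
# Minimal sketch (not from the draft rules): checks whether a visual label
# and an audio disclosure meet the draft's 10% thresholds. The data shapes
# and function names are illustrative assumptions.

from dataclasses import dataclass

VISUAL_COVERAGE_MIN = 0.10   # label must cover >= 10% of the display area
AUDIO_PREFIX_MIN = 0.10      # disclosure must span the first 10% of the clip

@dataclass
class VisualLabel:
    label_w: int   # label width in pixels
    label_h: int   # label height in pixels
    frame_w: int   # frame width in pixels
    frame_h: int   # frame height in pixels

def visual_label_ok(v: VisualLabel) -> bool:
    """True if the label area is at least 10% of the frame area."""
    return (v.label_w * v.label_h) >= VISUAL_COVERAGE_MIN * (v.frame_w * v.frame_h)

def audio_label_ok(label_end_s: float, clip_len_s: float) -> bool:
    """True if an audible disclosure runs from 0s through at least
    the first 10% of the clip's duration."""
    return label_end_s >= AUDIO_PREFIX_MIN * clip_len_s

if __name__ == "__main__":
    # 640x120 banner on a 1080p frame: 76,800 px vs. a 207,360 px floor -> fails
    print(visual_label_ok(VisualLabel(640, 120, 1920, 1080)))   # False
    # 3-second spoken disclosure on a 30-second clip -> passes
    print(audio_label_ok(label_end_s=3.0, clip_len_s=30.0))     # True
```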
Minister's view
"In Parliament as well as many forums, there have been demands that something be done about deepfakes, which are harming society. People using some prominent person's image, which then affects their personal lives, and privacy. Steps we have taken aim to ensure that users get to know whether something is synthetic or real," said IT Minister Ashwini Vaishnaw. Mandatory labelling and visibility aim to create a clear distinction for users.
Compliance and enforcement
- Non-compliance after final notification could lead to loss of safe harbour protections for large platforms.
- Due diligence obligations are strengthened for SSMIs and for services that enable synthetic content creation or modification.
- Stakeholder comments on the draft are invited until November 6, 2025.
Scope, triggers, and messaging apps
- Obligations are triggered on dissemination. If a video is generated but never posted publicly, platform duties may not apply. Once posted, responsibility extends to the intermediaries hosting or displaying the media and to the users who posted it.
- For messaging platforms, action is expected after they are put on notice to prevent virality of harmful synthetic content.
- This applies regardless of the tool used to create content, including widely known AI video or image generators, once content is shared at scale.
Why this matters for government teams
India is a top market for global platforms, and deepfakes are already in courtrooms. Recent examples include misleading ads about Sadhguru's "arrest" and lawsuits over alleged AI deepfake videos involving Aishwarya Rai Bachchan and Abhishek Bachchan. Expect more incidents during high-sensitivity periods like elections.
Immediate actions for ministries, agencies, and public-sector units
- Draft SOPs for official channels: when to label synthetic content used for outreach, what markers to apply, and how to preserve metadata.
- Set up an incident response playbook: intake, triage, rapid coordination with platforms, evidence preservation, and escalation paths for law enforcement.
- Update procurement and vendor contracts: require support for visible labels and provenance metadata; mandate logs for any content modification (see the provenance-record sketch after this list).
- Train media and PRO teams: spotting high-risk deepfakes, using authenticity checks, and issuing timely clarifications to the public.
- Coordinate with election and public safety authorities ahead of sensitive events; define fast takedown protocols for malicious synthetic content.
- Plan multilingual labelling standards so markers are clear across major Indian languages.
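To make the metadata-preservation and logging items above concrete, here is a minimal sketch of a sidecar provenance record, assuming nothing about any specific provenance standard (C2PA or otherwise). Every field name is an illustrative assumption, not a format prescribed by the draft rules.

```python
# Minimal sketch of a provenance/audit record for official media assets.
# Field names are illustrative assumptions, not mandated by the draft rules.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content hash so later copies can be matched back to this record."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def write_provenance_record(media: Path, is_synthetic: bool, tool: str) -> Path:
    """Writes a sidecar JSON record next to the media file and returns its path."""
    record = {
        "file": media.name,
        "sha256": sha256_of(media),
        "synthetic": is_synthetic,   # uploader/creator declaration
        "generation_tool": tool,     # e.g. the AI tool used, if any
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = media.parent / (media.name + ".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar
```

A content hash lets teams match downloaded or re-uploaded copies back to the original record even if visible labels are stripped along the way.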
Operational checklist for platform engagement
- Ask how labels will persist across edits, downloads, and re-uploads.
- Review verification methods for user declarations and thresholds for false positives/negatives.
- Confirm that labels cover at least 10% of visuals or the first 10% of audio, and that metadata is embedded by default.
- Ensure platforms have controls to stop virality after notice on messaging services.
- Define reporting, audit logs, and monthly compliance summaries for oversight (a sample summary format follows this list).
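For the reporting item, a hedged example of what a monthly compliance summary might contain. All fields are assumptions about what oversight teams could request from platforms, not a format from the draft rules.

```python
# Illustrative monthly compliance summary an oversight team might request
# from a platform. All fields are assumptions, not prescribed by the draft.

monthly_summary = {
    "period": "2025-11",
    "uploads_declared_synthetic": 12_450,
    "uploads_auto_detected_synthetic": 1_320,  # caught despite a "not synthetic" declaration
    "labels_applied": 13_770,
    "label_removal_attempts_blocked": 42,      # per the no-tampering obligation
    "takedown_notices_received": 18,
    "median_hours_to_act_on_notice": 6.5,
}
```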
Key risks and open questions to watch
- Label fatigue: markers must stay prominent without users learning to tune them out; consistency across apps matters.
- Interoperability: whether labels and metadata survive cross-platform sharing.
- Appeals and takedowns: how disputes over labelling or false flags will be handled.
- Creator tools: expectations for labelling by services that generate or modify media, not just social platforms.
Context and next steps
The draft seeks to make synthetic content clearly identifiable and limit harm from misinformation, impersonation, and fraud. Government teams should prepare internal protocols now, while contributing feedback before the consultation deadline.
Skills and capacity building
- For teams building practical literacy around generative AI and content authenticity, see job-focused training resources: Complete AI Training - Courses by Job