India's 3-hour deepfake takedown rule: What legal teams must action before Feb 20
India has tightened the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules with the 2026 amendments, notified on Feb 10 and effective Feb 20. The changes compress takedown timelines, impose proactive controls for synthetically generated information (SGI), and require quarterly user warnings.
For in-house counsel and platform compliance heads, this is an operational mandate for social media teams, not a policy memo. Three-hour removals, permanent provenance, and automated screening now sit squarely inside statutory expectations.
Key changes at a glance
- Three-hour compliance: Court-ordered or law enforcement-directed takedowns, including deepfakes, must be executed within 3 hours (down from 36).
- Two-hour removals: Non-consensual nudity must be removed within 2 hours (down from 24).
- Grievance redressal: Response window halved to 7 days.
- Quarterly user warnings: Intermediaries must notify users every 3 months of consequences for violating ToS/privacy policy/user agreements, including withdrawal of access and legal penalties.
- SGI regime: New definition, proactive detection, and mandatory labeling with permanent metadata/provenance.
- Prohibited SGI: CSAM, non-consensual nudity, obscene/sexually explicit material, false documents/records, content about procuring explosives/arms/ammunition, or deceptive depictions of real persons/events.
- SSMIs: Additional duties, including user SGI declarations and verification using technical measures.
What counts as "synthetically generated information" (SGI)
SGI includes audio, visual, or audiovisual content that is artificially or algorithmically created or modified to look real and be indistinguishable from actual persons or events. The definition tracks current research into synthetic media and detection techniques.
What is not SGI: Good-faith editing or technical adjustments (formatting, enhancement, color correction, noise reduction, transcription, compression) when they do not materially alter or misrepresent the substance or meaning. Routine creation and presentation of documents, decks, PDFs, and educational or research materials are also excluded if they do not distort the underlying content.
Proactive controls and labeling requirements
Intermediaries must deploy reasonable and appropriate technical measures, including automated tools, to prevent generation or sharing of unlawful SGI. This is explicit for categories like CSAM, non-consensual imagery, false records, and deceptive portrayals of people or events.
Where SGI is lawful, it must be prominently labeled: visible markers for visual content, prominent prefixes for audio, and embedded metadata or provenance (including a unique identifier of the computer resource used to generate the content). Suppressing, modifying, or removing labels or metadata is expressly prohibited.
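The Rules leave implementation to platforms. As a rough illustration, durable provenance can be modeled as a metadata record bound to a hash of the content, so later stripping or alteration is detectable. The field names (`generator_id`, `sgi_label`) are assumptions for this sketch, not the Rules' schema, and real deployments would add cryptographic signing (omitted here):

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ProvenanceRecord:
    """Illustrative provenance payload; field names are assumptions, not a prescribed schema."""
    generator_id: str  # unique identifier of the generating computer resource
    sgi_label: str     # e.g. the visible badge / audio prefix text
    created_at: str    # ISO 8601 timestamp

def embed_provenance(content: bytes, record: ProvenanceRecord) -> dict:
    """Bind provenance metadata to content via a content hash so tampering is detectable."""
    payload = asdict(record)
    payload["content_sha256"] = hashlib.sha256(content).hexdigest()
    # A production pipeline would sign this payload; key management is out of scope here.
    return payload

def verify_provenance(content: bytes, payload: dict) -> bool:
    """Re-hash the content and compare; a mismatch suggests content or metadata was altered."""
    return payload.get("content_sha256") == hashlib.sha256(content).hexdigest()
```

An integrity check like `verify_provenance` can then run at creation, editing, and distribution layers to flag suppressed or modified labels.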
Added duties for platforms offering SGI features
Platforms must warn users that unlawful SGI can trigger penalties, content removal, account suspension/termination, disclosure of identity to complainants, and mandatory reporting under the POCSO Act and BNSS.
Significant Social Media Intermediaries (SSMIs) have to capture user declarations where content is SGI and verify the accuracy of those declarations through technical means.
The three-hour standard: operational realities
The compressed timelines demand 24/7 coverage, law-enforcement integration, and automation. Expect to set up rapid response cells, pre-approved playbooks, and escalation ladders that function outside normal business hours.
Two-hour takedowns for non-consensual nudity will likely require prioritized queues, tuned classifiers, and a human-on-call model to avoid misses and over-removal.
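One way to sketch such a prioritized queue is to order open cases by their absolute statutory deadline, so the item closest to breach always surfaces first. The category names and the `TakedownQueue` API below are illustrative assumptions, not a prescribed design:

```python
import heapq
from datetime import datetime, timedelta, timezone

# Statutory windows under the 2026 amendments (category keys are assumed labels)
SLA_WINDOWS = {
    "non_consensual_nudity": timedelta(hours=2),
    "ordered_takedown": timedelta(hours=3),  # court/LEA-directed, including deepfakes
    "grievance": timedelta(days=7),
}

class TakedownQueue:
    """Min-heap of open cases keyed on absolute SLA deadline."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tiebreaker keeps equal deadlines in insertion order

    def add(self, case_id: str, category: str, received_at: datetime) -> datetime:
        """Register a case and return its computed compliance deadline."""
        deadline = received_at + SLA_WINDOWS[category]
        heapq.heappush(self._heap, (deadline, self._seq, case_id))
        self._seq += 1
        return deadline

    def next_case(self):
        """Pop the case whose deadline expires soonest."""
        deadline, _, case_id = heapq.heappop(self._heap)
        return case_id, deadline
```

In practice this queue would feed both the automated pipeline and the human-on-call rotation, with alerting well before each deadline.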
Safe harbor tension and risk exposure
These duties push intermediaries closer to active monitoring. As one expert noted, requiring measures to prevent creation and dissemination of unlawful SGI, plus authorizing takedowns and other actions, can blur safe harbor contours by implying direct and actual knowledge-raising liability exposure if controls fail or are inconsistently applied.
Expert perspectives
- "Mandatory transparency through permanent metadata and prominent labeling ensures users can distinguish AI content from reality. Slashing takedown timelines to three hours enforces rapid accountability and proactive tools against non-consensual imagery." - Advocate Yashaswini Basu
- "Implementation is another matter. Synthetic realistic content is everywhere. Without automated monitoring at scale, this risks becoming a well-intentioned but empty rule." - Senior advocate Srinath Sridevan
- "Near real-time takedowns and automated verification of 'synthetic' content risk incentivising platforms to err on the side of censorship." - Advocate Vikash Kumar Bairagi
- "Key challenges: uniformly identifying synthetic content, balancing compliance with privacy and free speech, and feasibility across platforms of different sizes." - Advocate Ankit Konwar
- "All lawful synthetic content must be clearly labeled with permanent provenance. The direction is clear: accountability, transparency, and technical safeguards." - Advocate Suhael Buttan
- "Implementing provenance mechanisms within 10 days is aggressive, given the complexity at scale. Determining which datasets are exempt from labeling may remain inconsistent." - Advocate Huzefa Tavawalla
- "Intermediaries' prevention duties may dilute safe harbor, exposing them to actual liability for unlawful content." - Advocate Arya Tripathy
- "Permanent metadata and unique identifiers enhance traceability and deter impersonation and non-consensual imagery. Platforms are expected to prevent harmful synthetic content upfront." - Advocate Rashmi Deshpande
- "Users generating unlawful synthetic content remain independently liable. Expect more friction, disclosures, and restricted features for AI tools." - Advocate Ankit Sahni
Compliance checklist for counsel (next 10 days)
- Stand up a 24/7 moderation and legal response desk with SLAs aligned to 3-hour and 2-hour windows.
- Update ToS, privacy policy, and in-product notices to enable quarterly user warnings and SGI obligations.
- Publish clear user prohibitions on unlawful SGI; implement declaration flows for SGI uploads and shares (SSMIs: include verification gates).
- Deploy or tune classifiers and hash-matching for CSAM, non-consensual nudity, deceptive deepfakes, and false documents; create a prioritized incident queue.
- Build labeling pipelines for lawful SGI: visual badges, audio prefixes, and durable metadata/provenance with unique generator identifiers.
- Block suppression or alteration of labels/metadata at creation, editing, and distribution layers; add integrity checks.
- Map reporting paths for POCSO/BNSS triggers; pre-draft notice templates and evidence preservation steps.
- Establish law enforcement request intake with authentication, ticketing, and audit trails; rehearse drills.
- Reduce grievance redressal SLA to 7 days; calibrate triage to surface SGI and non-consensual cases first.
- Set retention for logs, model outputs, and provenance data consistent with privacy law and evidentiary needs.
- Run bias and false positive reviews on automated tools; document override criteria and human review checkpoints.
- Amend vendor contracts and creator terms to require provenance, labeling, and response cooperation.
- Brief executives and creators; train moderators and customer support on new definitions and timelines.
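Several checklist items (declaration flows, verification gates, label checks) converge at the upload path. A minimal sketch of such a gate is below; the detector threshold, signal names, and outcome labels are all assumptions for illustration, not anything the Rules specify:

```python
def screen_upload(declared_sgi: bool, detector_score: float,
                  has_provenance: bool, threshold: float = 0.8) -> str:
    """Illustrative SSMI upload gate reconciling the user's SGI declaration,
    an automated detector signal, and the presence of durable provenance metadata.
    Threshold and outcomes are assumed values for this sketch."""
    detected_sgi = detector_score >= threshold
    if detected_sgi and not declared_sgi:
        # Declaration contradicts the automated signal: route to human review.
        return "hold_for_review"
    if (declared_sgi or detected_sgi) and not has_provenance:
        # Lawful SGI must carry permanent labeling/metadata before distribution.
        return "reject_missing_label"
    return "allow"
```

A real gate would also log every decision with timestamps and reviewer overrides, feeding the audit trail and the false-positive reviews noted above.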
Open questions to track
- Standardization of provenance formats across ecosystems and devices.
- Error rates in automated detection vs. user rights, and appeal mechanisms that still meet timelines.
- How "unique identifier of the computer resource" is defined and validated across cloud and on-device generation.
- Interplay with data protection and retention limits for metadata and logs.
- Enforcement consistency for small and niche platforms, and cross-border applicability.
Reference materials
- IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 - MeitY
- Protection of Children from Sexual Offences Act, 2012 - India Code
Bottom line: the government expects prevention, speed, and traceability. If your platform touches AI generation or distribution, treat this as a production incident: ship controls first, refine later, and keep a tight paper trail.