Fighting deepfakes needs nimble but realistic laws
Generative AI has made content cheap, fast, and persuasive. It has also supercharged misinformation, identity fraud, and non-consensual synthetic media. India woke up to the risk in 2023, when a deepfake of actor Rashmika Mandanna went viral, prompting high-level concern and a wave of Delhi High Court orders granting relief against AI chatbots, deepfake videos, and pornographic fabrications.
Courts have pushed platforms to act quickly, but litigation alone can't keep up with the volume. The government has now moved. An amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, effective 15 November 2025, directly targets synthetically generated information and tightens duties for social media intermediaries.
What the 2025 amendment changes
The amendment introduces a statutory definition of "synthetically generated information": content artificially or algorithmically created, modified, or altered using a computer resource in a way that appears reasonably authentic or true. This mirrors the approach of the EU's AI Act and global labelling efforts, including China's rules on AI-generated content.
Key duties now sit with Social Media Intermediaries (SMIs) and Significant Social Media Intermediaries (SSMIs):
- Prominent labelling: If a platform allows creation or dissemination of AI content, it must ensure clear labelling or embed permanent, unique identifiers/metadata.
- Label size and duration: Visual content labels or disclaimers must cover at least 10% of total surface area. Audio warnings must play for the first 10% of total duration.
- Creator declaration + verification: SSMIs must require uploaders to declare whether content is synthetic and deploy reasonable, appropriate technical measures (including automated tools) to verify those declarations.
- Mandatory disclaimer on confirmation: Where declaration or verification confirms synthetic origin, a clear, prominent disclaimer/label must be shown.
- Takedown without waiting for orders: Removal of synthetically generated content no longer hinges on a court order or government notice; SSMIs must make reasonable efforts to remove it themselves.
- Safe harbour risk: Non-compliance can jeopardize safe harbour under section 79 of the Information Technology Act, 2000.
This is a shift from reactive, court-led removals to proactive platform responsibility, and it aligns India with the broader international push to label and trace AI-generated media.
Why this matters for legal teams
The amendment closes the gap that let synthetic content circulate while courts caught up. But it also hands platforms wide discretion to assess what's synthetic, which can create inconsistent standards. Expect disputes over false positives, satire, filters, and legitimate creative uses of AI.
The stakes are high. For SSMIs, the cost of getting it wrong now includes losing safe harbour. For brands, creators, and public figures, the amendment provides a faster path to removal, but proof, provenance, and identity verification still decide outcomes.
Practical compliance for SMIs and SSMIs
- Update T&Cs and product flows: Add uploader declarations for synthetic content; log consent artifacts and timestamps.
- Deploy verification: Use a mix of AI classifiers, content provenance checks (e.g., metadata, hashes), and human review for edge cases.
- Implement labelling at render time: Enforce the 10% visual and first-10% audio rules across web, apps, and embeds; block removal of labels on re-uploads (a minimal sizing and hashing sketch follows this list).
- Content provenance and watermarking: Support persistent identifiers and metadata; preserve EXIF/XMP where possible; resist metadata stripping.
- Reasonable efforts playbook: Define SOC-style runbooks for detection, escalation, and removal with strict SLAs; track actions for audit.
- Appeals and transparency: Provide a channel for users to contest labels or removals; document criteria to reduce claims of arbitrary enforcement.
- Evidence handling: Preserve originals, hashes, headers, and access logs to support investigations and litigation.
- Training: Educate moderation, trust & safety, and legal teams on the new standards, including edge cases and bias risks.
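To make the numeric thresholds and re-upload blocking concrete, here is a minimal Python sketch. Only the 10% figures come from the amendment as described above; the function names, data structures, and blocklist are illustrative assumptions, not anything the Rules prescribe, and exact-match hashing will miss re-encoded or cropped copies (perceptual hashing or provenance metadata would be needed for those).

```python
import hashlib
from dataclasses import dataclass

# The 10% thresholds reflect the amendment as summarised above;
# everything else in this sketch is a hypothetical illustration.
MIN_LABEL_AREA_FRACTION = 0.10    # visual label must cover at least 10% of surface area
AUDIO_DISCLAIMER_FRACTION = 0.10  # audio disclaimer plays for the first 10% of duration


def content_hash(data: bytes) -> str:
    """SHA-256 of the raw bytes; catches exact re-uploads of already-actioned files."""
    return hashlib.sha256(data).hexdigest()


@dataclass
class LabelSpec:
    width_px: int
    height_px: int
    min_label_area_px: float      # minimum on-screen area the label must occupy
    audio_disclaimer_secs: float  # how long the opening disclaimer must run


def label_spec(width_px: int, height_px: int, duration_secs: float) -> LabelSpec:
    """Compute the minimum label footprint for content of the given dimensions and duration."""
    frame_area = width_px * height_px
    return LabelSpec(
        width_px=width_px,
        height_px=height_px,
        min_label_area_px=frame_area * MIN_LABEL_AREA_FRACTION,
        audio_disclaimer_secs=duration_secs * AUDIO_DISCLAIMER_FRACTION,
    )


if __name__ == "__main__":
    # A 1080p, 60-second clip: the label must cover about 207,360 px
    # and the audio disclaimer must run for the first 6 seconds.
    print(label_spec(1920, 1080, 60.0))

    blocked_hashes = {content_hash(b"previously-removed-file-bytes")}
    upload = b"previously-removed-file-bytes"
    if content_hash(upload) in blocked_hashes:
        print("Exact re-upload of actioned content; block or relabel before publishing.")
```

In practice this sits inside a larger moderation pipeline: the hash check runs at upload, while the label spec feeds whatever overlay or transcoding step applies disclaimers at render time.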
Risk issues in the grey areas
- What counts as "synthetic": Is color correction, face smoothing, or voice enhancement synthetic? The definition focuses on content that appears reasonably authentic or true; create internal thresholds and examples.
- Satire and parody: Labelling may suffice, but removal can still be justified if there is impersonation, harm, or unlawful elements.
- Cross-posting and embeds: Ensure labels persist across shares, downloads, and third-party embeds; detect and relabel where metadata is stripped.
- False declarations: Maintain penalties for misdeclaration and repeat offenders; combine declarations with technical checks.
- Creator privacy vs. transparency: Balance disclosure with data protection; only collect what you need for verification and audits.
For litigators and rights holders
- Faster takedowns: You no longer need to start with a court order. Use platform channels citing the 2025 amendment and provide identifiers, links, and harm statements.
- Evidentiary best practices: Capture source URLs, hashes, device metadata, and timestamps; preserve chain of custody for potential criminal complaints (a minimal capture sketch follows this list).
- Multi-platform strategy: Demand labelling plus removal where content is misleading and the harm is clear; insist on blocking re-uploads via hashing.
- Impersonation and likeness claims: Combine IT Rules with passing off, privacy, and other applicable torts or statutory claims.
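For the evidentiary point above, a short capture script helps fix the record at the moment of discovery. This is a hypothetical sketch, assuming a downloaded copy of the offending content: the manifest format and field names are assumptions, and it supplements, rather than replaces, the certification formalities Indian evidence law requires for electronic records.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def preserve_evidence(source_url: str, file_path: str,
                      manifest_path: str = "evidence_manifest.json") -> dict:
    """Record source URL, capture time (UTC), file size, and SHA-256 for a saved copy.

    Appends the entry to a JSON manifest so every capture has a time-stamped,
    verifiable record supporting later chain-of-custody arguments.
    """
    data = Path(file_path).read_bytes()
    entry = {
        "source_url": source_url,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "file": file_path,
        "size_bytes": len(data),
        "sha256": hashlib.sha256(data).hexdigest(),
    }
    manifest = Path(manifest_path)
    records = json.loads(manifest.read_text()) if manifest.exists() else []
    records.append(entry)
    manifest.write_text(json.dumps(records, indent=2))
    return entry
```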
Policy gaps that need attention
- Consistent standards: Leaving assessments to platforms will lead to uneven outcomes. An inter-ministerial body could set uniform criteria and reference datasets.
- Licensing and registries: Consider licensing for AI content creators and a registry for high-risk generators to improve traceability.
- Technical baselines: Specify minimum technical standards for identifiers, watermarking, and provenance so evidence holds up in court.
- Clear liability tiers: Distinguish duties for generators, distributors, and hosting services; align safe harbour with good-faith detection and response.
Action list for general counsel
- Map your features against SMI/SSMI obligations; confirm your status and exposure under section 79.
- Stand up a synthetic content program: declarations, verification, labelling, and removal workflows with SLAs.
- Build an appeals and error-correction process to address false positives and protect speech.
- Run red-team exercises on deepfake scenarios: political content, celebrity impersonation, financial scams, and sexualised deepfakes.
- Coordinate with PR and incident response on public disclosures when high-profile cases break.
- Document your "reasonable efforts" rigorously; this will be central to safe harbour arguments.
The bottom line
The amendment is a necessary step: it puts clear duties on platforms and removes the wait for court or government orders. The challenge is execution: drawing the line between deepfakes and legitimate creativity, and doing it at scale without trampling speech.
Until uniform standards land, legal teams should default to clarity: verify, label, remove where required, and keep airtight records. That's how you keep users safe and your safe harbour intact.