AI-Generated Content Rules Now Live: What Government Teams Must Action by Feb 20
The Ministry of Electronics and Information Technology (MeitY) has amended the IT Rules, 2021 to tighten response times and mandate clear labelling of AI-generated content. The new requirements take effect on February 20. If you work in policy, enforcement, or public communications, this changes your daily operations.
What's new at a glance
- Mandatory labels for "synthetically generated" images and videos on social platforms.
- Faster takedowns: 2-3 hours for government and court orders; tighter timelines for user complaints.
- User reminders of platform terms now at least once every three months, with clearer consequences for violations.
- Platform duties: user declaration + technical verification for AI content on services with 5M+ users.
Labelling synthetically generated information (SGI)
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 require platforms to "prominently" label AI-generated media. Services with more than five million users must obtain a user declaration and run technical checks before publishing such content.
Purpose: limit deepfakes, misinformation, privacy harms, and threats to national integrity by making inauthentic media obvious to viewers.
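To make the declaration-plus-verification duty concrete, here is a minimal sketch of a pre-publication gate. The Upload record, the detector interface, and the 0.5 threshold are illustrative assumptions, not part of the amended rules or any platform's actual API.

```python
# Minimal sketch of a pre-publication gate for synthetically generated content.
# Field names, the detector score, and the threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool   # declaration collected at upload time
    detector_score: float           # 0.0-1.0 from the platform's own classifier

def requires_sgi_label(upload: Upload, threshold: float = 0.5) -> bool:
    """Label if the user declares the content synthetic OR the technical check flags it."""
    return upload.user_declared_synthetic or upload.detector_score >= threshold

# Example: declared non-synthetic, but the classifier disagrees -> label anyway.
item = Upload(content_id="vid-001", user_declared_synthetic=False, detector_score=0.82)
print(requires_sgi_label(item))  # True
```

The point of the sketch is the OR logic: either signal on its own should be enough to trigger the label, so a false declaration cannot bypass the technical check.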
What's excluded and what's banned
- Excluded from SGI labelling: automatic camera retouching on smartphones; film special effects.
- Prohibited SGI: child sexual exploitation and abuse material, forged documents, instructions for making explosives, and deepfakes falsely depicting a real person.
Detection and provenance
Large platforms must deploy reasonable technical measures to detect unlawful SGI and meet labelling/provenance/identifier requirements for permissible SGI. Many already use such tools.
Provenance standards like the C2PA specification can help when AI-based detection fails. See the initiative here: c2pa.org. For policy context and updates, refer to MeitY.
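If you want to inspect provenance yourself, the sketch below shells out to the C2PA project's open-source c2patool CLI. It assumes the tool is installed locally; its flags and output format vary by version, so treat this as a starting point rather than a definitive check.

```python
# Sketch: check a media file for C2PA provenance data using the c2patool CLI.
# Assumes c2patool is installed and on PATH; output handling is simplified.

import subprocess

def has_c2pa_manifest(path: str) -> bool:
    result = subprocess.run(
        ["c2patool", path],   # default invocation prints manifest info if present
        capture_output=True,
        text=True,
    )
    # A non-zero exit or empty output is treated here as "no provenance found";
    # real workflows should inspect the manifest JSON and its validation status.
    return result.returncode == 0 and bool(result.stdout.strip())

print(has_c2pa_manifest("evidence/clip-042.mp4"))
```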
Compressed takedown timelines
- 2-3 hours to act on government and court takedown orders under Rule 3(1)(b).
- 1 week to address most user complaints (e.g., defamation, misinformation), down from two weeks.
- 36 hours to respond to user reports on "sensitive" content under Rule 3(2)(b), down from 72 hours.
Rationale: harmful content can cause damage well within prior windows; faster action is now required.
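A simple way to keep teams honest about the new windows is to compute deadlines automatically. The sketch below maps the order types summarized above to response windows, using the outer bound of the 2-3 hour range; verify the exact clock rules (receipt time, holidays, time zone) against the notified text.

```python
# Illustrative deadline calculator for the compressed response windows above.
# Windows mirror this article's summary; confirm details against the notified rules.

from datetime import datetime, timedelta, timezone

IST = timezone(timedelta(hours=5, minutes=30))

WINDOWS = {
    "government_or_court_order": timedelta(hours=3),
    "sensitive_content_report": timedelta(hours=36),
    "general_user_complaint": timedelta(days=7),
}

def response_deadline(received_at: datetime, order_type: str) -> datetime:
    return received_at + WINDOWS[order_type]

received = datetime(2026, 2, 21, 22, 15, tzinfo=IST)  # late-evening order receipt
print(response_deadline(received, "government_or_court_order"))  # 2026-02-22 01:15:00+05:30
```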
Stronger user notifications and consequences
Platforms must notify users of their terms at least once every three months. Notices must spell out the risks of harmful deepfakes and illegal AI content, possible disclosure of identity to law enforcement, immediate removal or blocking of content, and account suspension or termination.
What government teams should do this week
- Update SOPs for issuing takedown orders to align with the 2-3 hour clock. Clarify after-hours duty rosters and escalation paths.
- Designate a rapid-response point of contact for each major platform and confirm acknowledgement SLAs for urgent orders.
- Standardize templates for orders and notices that reference the amended rules and the relevant clauses.
- Prepare evidence kits for SGI cases: capture original links, hashes, timestamps, and any provenance data (see the sketch after this list).
- Brief legal and comms teams on exemptions (e.g., film SFX, auto-retouch) to avoid overreach and reduce back-and-forth.
- Coordinate with law enforcement on identity disclosure workflows and chain-of-custody for prohibited SGI.
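Here is a minimal evidence-record sketch, assuming a locally saved copy of the content. The field names and JSON layout are illustrative; align them with your agency's chain-of-custody requirements.

```python
# Minimal evidence-kit sketch: record URL, SHA-256 hash, and capture timestamp
# for a locally saved copy of the content. Field names are illustrative.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def evidence_record(url: str, saved_copy: Path) -> dict:
    digest = hashlib.sha256(saved_copy.read_bytes()).hexdigest()
    return {
        "source_url": url,
        "sha256": digest,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "local_file": str(saved_copy),
    }

record = evidence_record("https://example.com/post/123", Path("captures/post-123.mp4"))
print(json.dumps(record, indent=2))
```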
Guidance for platform-facing officers
- Confirm each platform's SGI labelling UX, user declaration flow, and pre-publication checks.
- Request documentation on detection tools and how they handle appeals, false positives, and provenance tags.
- Agree on takedown confirmation formats (ticket IDs, timestamps, content hashes) to meet audit needs; a sample record structure follows this list.
- Test the 2-3 hour workflow with a limited dry run and measure end-to-end response times.
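One way to standardize confirmations is to validate every platform response against an agreed field set. The field names below are assumptions to negotiate with each platform, not a prescribed format.

```python
# Illustrative structure for platform takedown confirmations, so audit trails
# capture the same fields regardless of which platform responded.

REQUIRED_FIELDS = {"ticket_id", "order_reference", "content_hash",
                   "actioned_at_utc", "action_taken"}

def validate_confirmation(confirmation: dict) -> list[str]:
    """Return the list of missing fields; empty means the record is complete."""
    return sorted(REQUIRED_FIELDS - confirmation.keys())

sample = {
    "ticket_id": "PLAT-88421",
    "order_reference": "ORDER/2026/0217",
    "content_hash": "sha256:9f2c...",  # placeholder hash value
    "actioned_at_utc": "2026-02-21T20:05:00Z",
    "action_taken": "removed",
}
print(validate_confirmation(sample))  # [] -> complete
```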
Risk hotspots to watch
- Election periods and civic events: surge in deepfakes and forged documents targeting public trust.
- Cross-posting: SGI can spread to smaller platforms lacking mature detection; track secondary vectors.
- Appeals backlog: faster removals can push more appeals; define clear, time-bound review steps.
Practical checklist
- Have a 24/7 on-call roster with platform contacts and legal sign-off lines clearly mapped.
- Maintain a shared dashboard for orders, deadlines, responses, and evidence artifacts.
- Document exemption logic to avoid removing lawful content (e.g., SFX, camera auto-edits).
- Log identity disclosure requests with statutory basis, scope, and retention limits (a minimal log-entry sketch follows this checklist).
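A minimal log-entry sketch for identity disclosure requests, with a retention check. The 180-day figure and the statutory-basis text are placeholders your legal team should replace with the actual provisions and retention periods that apply.

```python
# Sketch of an identity-disclosure request log entry with a retention check.
# The retention period and statutory-basis text are placeholders, not legal advice.

from datetime import date, timedelta
from typing import Optional

RETENTION_DAYS = 180  # placeholder, not a statutory figure

def is_past_retention(logged_on: date, today: Optional[date] = None) -> bool:
    today = today or date.today()
    return today > logged_on + timedelta(days=RETENTION_DAYS)

entry = {
    "request_id": "IDR-2026-0042",
    "statutory_basis": "cite the exact enabling provision here",  # placeholder
    "scope": "subscriber identity only, no message content",
    "logged_on": date(2026, 2, 21),
}
print(is_past_retention(entry["logged_on"], today=date(2026, 9, 1)))  # True
```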
If your team needs rapid upskilling
For structured learning on AI risk, detection, and policy, see role-based options here: Complete AI Training - Courses by Job.
The bottom line: labelling is now expected, detection is a platform obligation, and response windows are tight. Set up the people, process, and proof you'll need before February 20.