Govt clarifies: Label AI content, don't restrict it
The government has proposed amendments to the IT rules that focus on transparency, not control. The core ask: label AI-generated or AI-modified content clearly so people know what they're seeing and can judge it for themselves.
Electronics and IT Secretary S Krishnan said the aim is straightforward: label, don't restrict. Users can still post synthetic content, but it should be marked as such.
What's changing (proposed)
- Clear legal basis for labelling, traceability, and accountability of synthetically generated information.
- Mandatory labels that are prominent and cannot be deleted; metadata should be embedded to distinguish synthetic from authentic media.
- Joint responsibility: users, AI service providers, and social media platforms share the duty to enable and maintain accurate labels.
- Significant social media intermediaries (50 lakh+ users) must use reasonable technical measures to verify and flag synthetic information.
- Enforcement remains focused on unlawful content in general, not AI content per se.
- Draft open for stakeholder comments until November 6, 2025.
What counts as "synthetic"
- AI-generated images, text, audio, or video.
- Deepfakes or voice clones.
- Materially modified media where AI tools alter meaning or context.
Why this matters for government teams
Public trust rests on clarity. Official channels increasingly use AI for drafts, translations, images, and voice. These rules set a baseline: disclose synthetic elements, embed metadata, and keep a trail.
Immediate actions for departments and PSUs
- Update social media and communications SOPs to require visible labels on any AI-generated or AI-edited content.
- Adopt tools that support non-deletable labels and embedded metadata; avoid tools that don't offer this capability.
- Add procurement clauses requiring providers to enable prominent, persistent labelling and logs.
- Train teams on when and how to label; create a quick reference guide for content creators and approvers.
- Maintain audit trails (prompts, models used, time of generation, editor) to support traceability; a minimal logging sketch follows this list.
- Set up rapid review and takedown/escalation flows for suspected deepfakes targeting officials or public programs.
- Coordinate with platform partners on verification and flagging processes.
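A simple way to keep that audit trail is an append-only log written at generation time. The sketch below is a minimal illustration in Python, assuming a JSON Lines file; the field names (prompt, model, editor, output_ref) and the log path are placeholders for whatever your department's records policy prescribes, not fields mandated by the draft rules.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Placeholder path; point this at your department's records store.
LOG_PATH = Path("ai_content_audit.jsonl")

def log_generation(prompt: str, model: str, editor: str, output_ref: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "model": model,
        "editor": editor,          # person who approved or edited the output
        "output_ref": output_ref,  # file path or CMS ID of the published asset
    }
    # Append-only JSON Lines file: one record per line, easy to grep and audit.
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example usage (all values illustrative):
log_generation(
    prompt="Translate the press note to Hindi",
    model="example-llm-v1",
    editor="comms.officer@example.gov.in",
    output_ref="press/2025/note-142-hi.txt",
)
```

JSON Lines keeps the trail greppable and lets you retain or purge per record, which helps when aligning with the RTI and retention considerations noted below.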
How to implement labels that stick
- Use persistent on-screen labels (e.g., "AI-generated" or "AI-edited") on images and videos, plus embedded metadata (see the sketch after this list).
- Adopt open standards for provenance and content credentials (for example, C2PA) to improve interoperability across tools and platforms.
- Configure AI tools to auto-apply labels and metadata at export; block export if labelling fails.
- Test for removal attempts: re-uploads, recompression, cropping, or format changes should not strip the metadata or label.
- Ensure accessibility: labels should be screen-reader friendly and visible on mobile.
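The sketch below illustrates the export-time pattern in Python using Pillow: draw a visible label, embed a metadata flag, and fail closed if the flag did not persist. It is a minimal sketch assuming PNG output; the metadata keys are illustrative, and bare PNG text chunks are easily stripped by recompression or format changes, which is why production workflows should layer C2PA content credentials on top and run the survival check after every transform.

```python
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

LABEL = "AI-generated"

def export_with_label(src_path: str, dst_path: str, model: str) -> None:
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Visible, persistent on-screen label in the bottom-left corner.
    draw.rectangle([0, img.height - 28, 150, img.height], fill="black")
    draw.text((8, img.height - 22), LABEL, fill="white")
    # Embedded metadata flag so tools can distinguish synthetic media.
    # Key names are illustrative, not prescribed by the draft rules.
    meta = PngInfo()
    meta.add_text("SyntheticMedia", "true")
    meta.add_text("GeneratorModel", model)
    img.save(dst_path, "PNG", pnginfo=meta)
    # Fail closed: block the export if the flag did not persist.
    if Image.open(dst_path).text.get("SyntheticMedia") != "true":
        raise RuntimeError("Labelling failed; export blocked")

def survives_transform(path: str) -> bool:
    """Re-open the file and check the embedded flag is still present.
    Run this after re-uploads, recompression, cropping, or format changes."""
    img = Image.open(path)
    return getattr(img, "text", {}).get("SyntheticMedia") == "true"
```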
For IT, legal, and platform liaison teams
- Map current AI usage across units; identify content streams that need labelling.
- Set detection thresholds and review workflows for suspected synthetic content circulated about your department (a triage sketch follows this list).
- Align data retention for prompts and generation logs with legal and RTI considerations.
- Prepare a template for stakeholder feedback before the consultation deadline.
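For the review workflow, a threshold-based triage rule is a reasonable starting point. The sketch below assumes an upstream detector that returns a synthetic-likelihood score between 0 and 1; the detector, score, and threshold values are all placeholders to be calibrated against your own content streams, not anything specified in the draft rules.

```python
from dataclasses import dataclass

@dataclass
class TriageDecision:
    action: str   # "auto_flag", "human_review", or "no_action"
    score: float

# Placeholder thresholds; tune against your own false-positive tolerance.
REVIEW_THRESHOLD = 0.5   # route to human review above this score
FLAG_THRESHOLD = 0.9     # auto-flag and escalate above this score

def triage(score: float) -> TriageDecision:
    """Route content based on a detector's synthetic-likelihood score."""
    if score >= FLAG_THRESHOLD:
        return TriageDecision("auto_flag", score)
    if score >= REVIEW_THRESHOLD:
        return TriageDecision("human_review", score)
    return TriageDecision("no_action", score)
```

Keeping the thresholds in one place makes them easy to tighten during sensitive periods (elections, major announcements) without touching the rest of the workflow.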
Timeline and next steps
Form an internal working group now (communications, IT, legal, training). Pilot labelling on priority content streams. Submit comments by November 6, 2025.
References: IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 | C2PA Content Credentials standard
If your team needs practical upskilling on AI use, safety, and disclosure, see curated resources at Complete AI Training - courses by job role.