India finalizing rules to label AI-generated content, pushing platforms and AI tools to flag deepfakes

India will soon mandate clear labels on AI-generated content, shifting from advisories to enforceable rules. Platforms and AI tool providers must add visible markers and metadata.

Published on: Jan 22, 2026

India set to require clear labels on AI-generated content: What government teams need to know

The government is close to finalising rules that will mandate clear labelling of AI-generated content. IT Secretary S. Krishnan said the draft is in its final legal vetting, with a simple goal: help people spot synthetic media before they mistake it for fact.

The announcement came at an industry event hosted by Nasscom. The signal is clear: policy is moving from advisories to enforceable obligations.

Who will be responsible

Two groups will carry the main obligations: providers of AI tools (e.g., ChatGPT, Grok, Gemini) and social media platforms (e.g., Facebook, YouTube). These are largely big tech firms with the technical capacity to implement labels and detection systems.

As Krishnan put it: "Labelling something as AI-generated content offers people the opportunity to examine it… you know that it is AI-generated and that it is not masquerading as the truth."

What the draft rules propose

  • Mandatory labels for AI-generated or altered content using prominent visual or audio markers.
  • Marker size/duration: visual markers covering at least 10% of the display, or an audio disclosure at the start of the clip (a rough sketch of the visual marker follows this list).
  • Embedded metadata to identify synthetic material and support traceability and accountability.
  • Stronger duties on platforms to detect and flag deepfakes and other synthetic media that could cause harm or misinformation.
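
The 10% marker requirement is straightforward to prototype. Below is a minimal Python sketch, using the Pillow imaging library, that stamps an "AI-generated" banner across the bottom 10% of an image. The threshold, wording, and banner-height interpretation here are assumptions; the final technical specifications will define how "10% of the display" is actually measured.

```python
# pip install Pillow
from PIL import Image, ImageDraw

def stamp_ai_label(src_path: str, out_path: str, text: str = "AI-generated") -> None:
    """Burn a visible AI label into the bottom 10% of an image.

    Assumes the draft rule's "10% of the display" refers to frame
    height; the notified specification may measure it differently.
    """
    img = Image.open(src_path).convert("RGB")
    width, height = img.size
    banner_height = max(1, int(height * 0.10))  # bottom 10% of the frame

    draw = ImageDraw.Draw(img)
    # Solid banner so the label stays legible on any background.
    draw.rectangle([(0, height - banner_height), (width, height)], fill=(0, 0, 0))
    # Pillow's default bitmap font is small; a production pipeline
    # would load a scaled TrueType font via ImageFont.truetype().
    draw.text((10, height - banner_height + banner_height // 3),
              text, fill=(255, 255, 255))
    img.save(out_path)

# Example: stamp_ai_label("render.png", "render_labelled.png")
```

Burning the label into the pixels, rather than overlaying it at display time, means it survives re-uploads, crops to a lesser extent, and screenshots.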

These changes build on proposed amendments to the IT Rules, for which the ministry had earlier sought stakeholder feedback. For reference, see the IT Rules framework.

Why this matters for government teams

Deepfake audio and video present real risks: misinformation, reputational damage, election interference, and even financial fraud. Clear labels and metadata won't solve everything, but they raise friction for bad actors and give citizens a fair signal before they trust and share content.

Practical steps to get ready

  • Map your AI touchpoints: Inventory where your department uses AI (content creation, media edits, chatbots, translation). Note what gets published to official channels.
  • Update your social media SOPs: If a post includes AI-generated text, images, audio, or video, add a visible label on the asset (min. 10% of the frame) and an opening audio notice when relevant.
  • Adopt a metadata standard: Use provenance standards such as C2PA for embedded identifiers. Ensure your tools preserve metadata through export and publishing (see the provenance sketch after this list).
  • Fix procurement and contracts: Require vendors and agencies to label AI-generated assets, embed metadata, keep audit logs, and support takedowns. Add rights to verify compliance.
  • Tune platform settings: Configure detection/flagging features where available. Establish escalation paths with platform policy teams for rapid takedown of harmful deepfakes.
  • Stand up incident response: Define who reviews suspected synthetic media, how evidence is captured, and how requests are sent to platforms and law enforcement.
  • Train your staff: Run short sessions on spotting synthetic media, using labels consistently, and escalating edge cases. If you need structured upskilling, see Complete AI Training - courses by job.
  • Govern data and logs: Track when AI is used, store prompts/outputs where appropriate, and protect personal data. Document your labelling approach for audits (a simple logging sketch also follows this list).
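
For the metadata step, the C2PA project publishes an open-source CLI, c2patool, that attaches a signed provenance manifest to media files. The sketch below drives it from Python. The manifest follows C2PA's documented manifest-definition format, and the IPTC "trainedAlgorithmicMedia" source type marks the asset as AI-generated; the exact CLI flags may vary by version, so verify them against your installed c2patool.

```python
# Requires the c2patool binary from the C2PA project on PATH
# (https://github.com/contentauth/c2patool). The -m/-o flags below
# reflect the tool's documented usage but should be confirmed
# against the version you install.
import json
import subprocess
import tempfile

def attach_provenance(src: str, dst: str) -> None:
    # A minimal manifest declaring the asset as AI-generated.
    # "digitalSourceType" uses the IPTC term for synthetic media.
    manifest = {
        "claim_generator": "dept-publishing-pipeline/0.1",
        "assertions": [
            {
                "label": "c2pa.actions",
                "data": {
                    "actions": [
                        {
                            "action": "c2pa.created",
                            "digitalSourceType": (
                                "http://cv.iptc.org/newscodes/"
                                "digitalsourcetype/trainedAlgorithmicMedia"
                            ),
                        }
                    ]
                },
            }
        ],
    }
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        json.dump(manifest, f)
        manifest_path = f.name

    subprocess.run(["c2patool", src, "-m", manifest_path, "-o", dst], check=True)

# Example: attach_provenance("render_labelled.png", "render_signed.png")
```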
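
Likewise, the data-and-logs step can start small: an append-only record of every AI-assisted publish. A minimal sketch follows, assuming a JSON Lines file as the store; the field names are illustrative, not mandated by the draft rules, and a production system would add access controls and retention policies.

```python
# Minimal append-only audit trail for AI-assisted publishing.
# Field names are illustrative, not drawn from the draft rules.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_usage_log.jsonl")

def log_ai_use(tool: str, purpose: str, output_ref: str, labelled: bool) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,              # the AI service or model used
        "purpose": purpose,        # e.g. "social media graphic"
        "output_ref": output_ref,  # where the asset was published
        "labelled": labelled,      # visible marker applied?
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example:
# log_ai_use("image-generator", "press graphic",
#            "https://example.gov/post/123", True)
```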

No separate AI law, for now

There's no immediate plan for a standalone AI Act. The government believes existing laws are sufficient at this stage, though a future Act isn't ruled out. For now, expect enforcement to flow through the IT Rules once these amendments are notified.

What to watch next

  • Final text and timelines after legal vetting by the ministry.
  • Technical specifications for labels, watermarking, and metadata that platforms must support.
  • Enforcement guidance, including penalties and complaint mechanisms.
  • Clarity for smaller entities and accessibility requirements for labels and audio notices.

The direction is clear: if your team produces or posts content that touches AI, build labelling and provenance into the workflow now. It's simpler, and safer, to bake this in before the rules land.

