AI Content Rules Updated: Government Prepares to Tighten Grip on Deepfakes
Deepfakes are moving from fringe novelty to real risk. To counter misinformation and abuse, the IT Ministry has drafted new rules focused on AI-generated audio, video, and images. The goal is clear: protect citizens, institutions, and elections from synthetic content that looks real but isn't.
Stakeholder feedback is open until November 6. If your department touches content, communications, or platform oversight, now is the time to weigh in.
Why this matters for government teams
Fake content spreads fast and hits hard. It can sway opinion, damage reputations, and create confusion at scale. Parliament has raised concerns after deepfake videos of public figures surfaced, and IT Minister Ashwini Vaishnaw confirmed the government is moving to identify and curb such misuse.
What the draft proposes
- Mandatory AI labelling: Any AI-generated video, audio, or image must be labelled before upload so viewers know it's synthetic.
- User authentication before upload: Platforms must verify user identity prior to posting content.
- Minimum label prominence: Per the draft, labels must be hard to miss, covering at least 10% of the visual display area, or running through the first 10% of an audio clip's duration.
- Platform accountability: Services with 5 million+ users (e.g., Facebook, X, YouTube) are responsible for detecting and flagging AI fakes.
The draft signals tighter controls on distribution, clearer attribution, and stronger traceability.
Risks the rules target
- Election interference: Synthetic media timed to smear candidates or mislead voters.
- Financial fraud: Voice cloning and fabricated endorsements that trick citizens.
- Public order: Manipulated content meant to inflame sentiment or incite unrest.
What government teams should do now
- Map exposure: Identify where your department publishes, hosts, or amplifies media. Include WhatsApp, YouTube, X, and departmental sites.
- Define AI labels: Standardize how "AI-generated" and "AI-edited" tags appear in posts, captions, and watermarks.
- Set identity checks: For official channels, require verified operators and audit logs for all uploads.
- Incident response: Create a 24/7 workflow for reporting, verifying, and escalating suspected deepfakes to platforms and relevant authorities.
- Election-time protocol: Pre-approve messaging, takedown criteria, and rapid approvals during the Model Code of Conduct period.
- Vendor clauses: Add compliance requirements for media agencies and tech partners (labelling, logs, watermark detection, response times).
- Evidence handling: Keep hashes, timestamps, and source URLs to support legal action (a minimal sketch follows this list).
- Training: Brief spokespersons, social teams, and helpdesks on spotting synthetic content and handling citizen queries.
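The rules don't prescribe an evidence format, so treat the sketch below as one workable convention, not a mandate. It hashes a suspect file, stamps the capture time in UTC, and appends a record to a JSON Lines log; the file names and field names are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_evidence(path: str, source_url: str,
                    log_file: str = "evidence_log.jsonl") -> dict:
    """Hash a suspect media file and append a timestamped record to a log."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files don't load into memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    record = {
        "file": path,
        "sha256": digest.hexdigest(),
        "source_url": source_url,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_file, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return record

# Usage: record_evidence("suspect_clip.mp4", "https://example.com/post/123")
```

A chunked read keeps memory use flat even for long video files, and SHA-256 is a widely accepted file fingerprint for later verification.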
How platforms will be held accountable
Large platforms must detect and flag AI fakes and ensure uploaders are verified. Expect more prompts at upload, stronger watermark checks, and faster takedowns. Agencies managing official pages should prepare for new compliance steps and stricter enforcement of community guidelines.
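The draft doesn't fix a technical carrier for labels; visible overlays and provenance metadata (the C2PA standard is the main industry effort) are the likely building blocks. As a toy illustration only, assuming Pillow is available, here is how a machine-readable flag could travel in PNG metadata; the key name is invented for this example.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

LABEL_KEY = "ai_generated"  # illustrative key name, not prescribed by the draft

def tag_png(src: str, dst: str) -> None:
    """Embed a machine-readable AI flag in a PNG text chunk."""
    img = Image.open(src)
    meta = PngInfo()
    meta.add_text(LABEL_KEY, "true")
    img.save(dst, pnginfo=meta)

def is_tagged(path: str) -> bool:
    """Return True if the PNG carries the AI flag."""
    return Image.open(path).info.get(LABEL_KEY) == "true"
```

Embedded flags survive a direct upload but are often stripped by re-encoding, which is one reason visible on-screen labels matter alongside them.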
Timeline and input
Feedback deadline: November 6. Departments should consolidate inputs across legal, IT, communications, and field units. Submit practical feedback on labelling formats, identity verification methods, and feasible turnaround times for takedowns.
Implementation tips
- Standard captions: Use a consistent tag like "AI-generated visual" or "Synthetic voice sample."
- Watermark detection: Enable platform-native checks and pilot third-party detectors for internal verification.
- Whitelists: Maintain approved source libraries for official visuals and audio to ease authenticity checks (see the hash-check sketch after this list).
- Cross-agency channel: Set up a shared escalation group with law enforcement, cyber cells, and media units.
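For the whitelist item above, exact-match hashing is the simplest workable scheme. A minimal sketch, assuming a local directory of approved assets (paths are illustrative):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Fingerprint a file by hashing it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_whitelist(asset_dir: str) -> set[str]:
    """Hash every approved official asset into an allow-list."""
    return {sha256_of(p) for p in Path(asset_dir).iterdir() if p.is_file()}

def is_official(path: str, whitelist: set[str]) -> bool:
    """True only if the file is byte-identical to an approved asset."""
    return sha256_of(Path(path)) in whitelist

# approved = build_whitelist("official_assets/")
# is_official("incoming_image.png", approved)
```

Note the limitation: a hash only catches byte-identical copies, so any crop or re-encode defeats it. Treat this as a first-pass filter and pair it with the watermark checks above.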
Quick checklist
- Agree on AI labelling standards across all official channels.
- Enable identity verification for all upload operators.
- Document a clear review-and-takedown workflow.
- Add AI-compliance clauses to vendor contracts.
- Train frontline teams on deepfake detection and citizen guidance.
- Submit consolidated feedback to the IT Ministry by November 6.
The message is simple: synthetic media must be visible as synthetic, and those distributing it must be accountable. With the right controls in place, we protect citizens and keep public communication trustworthy.