Parliamentary panel urges licensing, labels, and tougher laws to curb AI-driven fake news

Categorized in: AI News, Legal
Published on: Sep 15, 2025

New Delhi | Published: September 14, 2025, 17:15 IST

A parliamentary committee has urged the government to develop clear legal and technological tools to identify and prosecute those spreading AI-generated fake news. The Standing Committee on Communications and Information Technology, led by BJP MP Nishikant Dubey, has submitted its draft report to Lok Sabha Speaker Om Birla and expects it to be tabled in the next session.

What's on the table

The committee wants coordinated action across the Ministry of Information and Broadcasting, the Ministry of Electronics and Information Technology (MeitY), and other departments. It proposes exploring licensing requirements for AI content creators and mandatory labeling of AI-generated videos and other content.

  • Inter-ministerial coordination to build legal and technical solutions for detection and prosecution.
  • Feasibility study for licensing AI content creators.
  • Mandatory labeling of AI-generated content across formats.
  • Amendments to penal provisions, higher fines, and clearer accountability.
  • Mandatory fact-checking mechanisms and an internal ombudsman in all print, digital, and electronic media organisations.
  • AI to serve as a first layer of monitoring, with human review for final decisions (a minimal routing sketch follows this list).
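The report does not prescribe an architecture for that last item, but the two-tier idea maps naturally onto a simple routing rule. Here is a minimal Python sketch, assuming hypothetical thresholds and a stub classifier (`ai_risk_score`); neither the names nor the cutoffs come from the report.

```python
from dataclasses import dataclass

# Assumed thresholds -- real values would come from your own model
# evaluation and documented SOPs, not from the committee report.
AUTO_CLEAR = 0.20   # below this score, publish without review
ESCALATE   = 0.80   # at or above this score, hold and escalate immediately

@dataclass
class Item:
    item_id: str
    text: str

def ai_risk_score(item: Item) -> float:
    """Placeholder for a misinformation classifier.

    A real deployment would call a trained model here; this stub only
    illustrates where that call sits in the workflow.
    """
    return 0.5  # hypothetical constant for the sketch

def route(item: Item) -> str:
    """First-layer AI flagging; humans make the final call on anything flagged."""
    score = ai_risk_score(item)
    if score < AUTO_CLEAR:
        return "publish"            # low risk: no human review needed
    if score >= ESCALATE:
        return "hold_and_escalate"  # high risk: senior review before any action
    return "human_review"           # everything in between joins the review queue

if __name__ == "__main__":
    print(route(Item("demo-1", "Example post text")))
```

The point of the pattern is that the AI layer only narrows the queue; every non-trivial decision is logged and lands with a human, which is what the committee's "final decisions" language implies.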

Why this matters for legal teams

If adopted, these measures would create new duties around provenance, disclosures, and review workflows. Media houses, platforms, advertisers, and AI startups should prepare for licensing, labeling, auditability, and faster takedown protocols.

Contracts with creators, vendors, and platforms will need clauses on authenticity, watermarking/provenance metadata, cooperation with authorities, and indemnities for synthetic content misuse. Expect higher exposure for failure to label, repeated violations, or ignoring takedown and fact-check outcomes.

Enforcement and technology signals

MeitY has formed a nine-member panel to examine deepfakes and has two projects underway: one on fake speech detection using deep learning and another to detect deepfake videos and images. The committee underscores a balanced posture: AI can detect misinformation, but it can also generate it.

Quote from the report: "AI and machine learning (ML) technologies are increasingly being employed to enhance the ability to detect, verify, and prevent the spread of misinformation and disinformation."

Compliance checklist to start now

  • Inventory: Map where your organisation creates, hosts, distributes, or amplifies AI-generated content.
  • Labeling: Draft clear, consistent labels; plan for visible on-screen tags, captions, and file-level metadata.
  • Provenance: Adopt watermarking and content provenance standards (e.g., cryptographic signatures, tamper-evident logs); a sketch follows this checklist.
  • SOPs: Build a two-tier review workflow: AI flagging first, human verification next. Document thresholds and escalation paths.
  • Ombudsman: Prepare the charter, independence safeguards, and reporting cadence for an internal ombudsman.
  • Fact-check: Define sources, turnaround times, and correction policies. Preserve version histories.
  • Contracts: Update creator and vendor terms to require labeling, provenance, cooperation with regulators, and indemnities.
  • Incident response: Set up a mis/disinformation playbook covering intake, triage, takedown, evidence preservation, and regulator notices.
  • Records: Retain logs, model prompts/outputs where lawful, and chain-of-custody for potential prosecution.
  • Training: Brief editorial, product, and legal teams on detection tools, labeling rules, and escalation.
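For the provenance and records items above, a hash-plus-signature manifest and a hash-chained audit log are the simplest starting points. The Python sketch below is illustrative only: the field names, the HMAC scheme, and the in-source key are assumptions, not requirements from the report; a production system would use a managed key and an established provenance standard such as C2PA.

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key -- in production this would live in a KMS/HSM,
# never in source code. All field names below are illustrative, not mandated.
SIGNING_KEY = b"replace-with-managed-key"

def sign_manifest(content: bytes, ai_generated: bool) -> dict:
    """Build a labeled, signed provenance record for a piece of content."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": ai_generated,   # the label, carried as file-level metadata
        "created_at": int(time.time()),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def append_log(log: list, event: dict) -> list:
    """Tamper-evident audit log: each entry chains the hash of the previous one,
    so any retroactive edit breaks every later entry_hash."""
    prev = log[-1]["entry_hash"] if log else "genesis"
    body = json.dumps({"prev": prev, "event": event}, sort_keys=True).encode()
    log.append({
        "prev": prev,
        "event": event,
        "entry_hash": hashlib.sha256(body).hexdigest(),
    })
    return log

if __name__ == "__main__":
    m = sign_manifest(b"<video bytes>", ai_generated=True)
    log = append_log([], {"action": "labeled_and_signed", "sha256": m["sha256"]})
    print(m["signature"][:16], log[0]["entry_hash"][:16])
```

The same chained-log structure doubles as the chain-of-custody record flagged under "Records": exporting the log alongside the signed manifests gives investigators a verifiable sequence of who labeled what, and when.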

Key legal questions to watch

  • Scope: What qualifies as "AI-generated" and which transformations trigger labeling?
  • Jurisdiction: How rules apply to cross-border platforms and creators.
  • Safe harbors: Treatment of intermediaries and due diligence standards.
  • Due process: Appeals against labeling, takedowns, and ombudsman findings.
  • Penalties: Calibration of fines, repeat-offender thresholds, and corporate officer liability.
  • Interaction with existing rules: Consistency with IT rules and data protection obligations.

Practical next steps

  • Run a gap assessment against likely licensing, labeling, and audit requirements.
  • Pilot AI-first moderation that routes high-risk items to human review.
  • Stand up your ombudsman function with independence and clear reporting lines.
  • Stage agreements and policies for quick updates once the government issues draft rules.

Skills and capability building

If your legal and compliance teams need structured upskilling on AI risk and governance, review role-based programs here: AI courses by job.