Parliamentary panel seeks licenses, labels, tougher laws to curb AI deepfakes and fake news


Published on: Sep 15, 2025

Parliamentary panel seeks legal-tech framework to curb AI-made fake news

A parliamentary committee has asked the government to build concrete legal and technological tools to identify and prosecute those spreading AI-generated fake news. The committee, chaired by BJP MP Nishikant Dubey, has submitted its draft report to Lok Sabha Speaker Om Birla; the report will be tabled in the next session.

The panel calls for a balanced approach: AI can help detect misinformation, yet it can also create it. Its recommendations are not binding, but such reports often guide policy and enforcement.

Why this matters

  • AI-driven misinformation is a serious threat to public order and the democratic process.
  • Government, media, and tech teams need clear rules, shared standards, and tooling that works at scale.

Key recommendations

  • Close coordination between the Ministry of Information and Broadcasting, MeitY, and other relevant ministries to develop legal and tech solutions.
  • Explore licensing requirements for AI content creators.
  • Mandate labeling of AI-generated videos and content.
  • Amend penal provisions, increase fines, and fix accountability for offenders.
  • Require a fact-checking mechanism and an internal ombudsman in all print, digital, and electronic media organisations, developed through consensus with stakeholders.

What's already in motion

MeitY has set up a nine-member panel to examine deepfakes and related harms. Two projects underway focus on fake speech detection using deep learning and software to detect deepfake videos and images.

AI can flag suspected content, with human review as a second layer; this is the operational model the panel favors for now. For context on the ministry's initiatives, see the MeitY site.
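The two-layer model the panel favors (AI triage followed by human review) amounts to a simple routing rule. A minimal sketch, in which the thresholds, score source, and queue names are illustrative assumptions rather than anything prescribed by the panel:

```python
# Minimal sketch of a two-layer review pipeline: an AI model scores
# content for misinformation risk, and only uncertain or high-risk
# items reach trained human reviewers. Thresholds are assumptions.

AUTO_CLEAR = 0.2   # below this score, publish without review (assumed)
AUTO_FLAG = 0.9    # at or above this, hold and fast-track to humans (assumed)

def route(ai_score: float) -> str:
    """Route a content item based on the AI model's risk score in [0, 1]."""
    if ai_score < AUTO_CLEAR:
        return "publish"            # low risk: no human action needed
    if ai_score >= AUTO_FLAG:
        return "hold_for_review"    # high risk: human decides before publishing
    return "queue_for_review"       # uncertain: human reviews after publication

# Example: triage a batch of scored items
scores = [0.05, 0.55, 0.95]
decisions = [route(s) for s in scores]
print(decisions)  # ['publish', 'queue_for_review', 'hold_for_review']
```

The key design choice is the middle band: items the model is unsure about still publish but enter a review queue, so human capacity is spent where the model is least reliable.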

Implementation guide for ministries and regulators

  • Form a joint task force (I&B, MeitY, Law, Home) with clear timelines, a public consultation process, and transparent reporting.
  • Define "AI-generated" and "materially manipulated" content, with thresholds for enforcement.
  • Standardize provenance and disclosure: watermarking, content credentials, and tamper-evident logs. Consider open standards such as C2PA.
  • Design due process: graded penalties, notices, appeal windows, and safe harbors for good-faith compliance by platforms and publishers.
  • Publish evaluation metrics for detection tools (precision, recall, false positives) and minimum performance baselines for procurement.
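The evaluation metrics named above can be computed directly from a confusion matrix. A short sketch of how a procurement baseline might be checked; the counts in the example are made up for illustration:

```python
# Compute detection-tool metrics from raw evaluation counts.
# tp/fp/fn/tn come from running a detector over labeled samples;
# the example numbers below are invented for illustration.

def detection_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # flagged items that were truly fake
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # fake items the tool caught
    fpr = fp / (fp + tn) if (fp + tn) else 0.0        # genuine items wrongly flagged
    return {"precision": precision, "recall": recall, "false_positive_rate": fpr}

# Hypothetical evaluation run: 80 fakes caught, 20 genuine items wrongly
# flagged, 10 fakes missed, 890 genuine items correctly cleared.
m = detection_metrics(tp=80, fp=20, fn=10, tn=890)
print(m)  # precision 0.8, recall ~0.889, false-positive rate ~0.022
```

Publishing both recall and false-positive rate matters: a tool tuned only for recall can flood human reviewers with false alarms.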

Action items for IT leaders and product teams

  • Build a two-layer moderation stack: AI models for triage, followed by trained human reviewers.
  • Ship labeling across surfaces: visible badges on posts, embedded watermarks in media, and API fields for downstream systems.
  • Track and audit model performance; maintain immutable logs and model/version registries for accountability.
  • Adopt provenance tooling: cryptographic hashes for media, signed content manifests, and end-to-end traceability.
  • Plan for licensing scenarios: document data sources, fine-tuning steps, contributors, and release notes for each model update.
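The provenance items above (content hashes and signed manifests) can be prototyped with the standard library alone. A minimal sketch assuming an HMAC shared key for signing, not the full C2PA content-credentials scheme; the field names and key handling are illustrative:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-managed-secret"  # assumption: fetched from a KMS in production

def make_manifest(media_bytes: bytes, creator: str, tool: str) -> dict:
    """Build a signed provenance manifest for a media asset."""
    body = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # tamper-evident content hash
        "creator": creator,
        "generator": tool,     # e.g. the AI model that produced the media
        "ai_generated": True,  # the disclosure label the panel recommends
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return body

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check both the manifest signature and the content hash."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and body["sha256"] == hashlib.sha256(media_bytes).hexdigest())

media = b"\x00fake-video-bytes"
m = make_manifest(media, creator="newsroom-x", tool="gen-model-v1")
print(verify_manifest(media, m))          # True
print(verify_manifest(media + b"!", m))   # False: content was altered
```

For real deployments, asymmetric signatures (so verifiers need no shared secret) and an open standard such as C2PA would replace the HMAC shortcut used here.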

What media organisations should set up

  • Internal ombudsman with a clear charter, independence, and service-level targets for complaints and corrections.
  • Pre-publication fact-checking workflows, prioritizing elections, security issues, and public health.
  • Source verification checklists for images, audio, and video; adopt authenticated provenance where available.
  • User reporting channels with transparent correction logs and post-publication updates.

Risks to manage

  • Over-reliance on AI for truth claims: keep humans in the loop for final decisions.
  • Bias and privacy concerns in training and evaluation datasets: use diverse, documented data and conduct regular audits.
  • Cross-border content and jurisdiction conflicts: coordinate with platforms and CERT-like bodies on response playbooks.

What's next

The report will be tabled in the next parliamentary session. If the government acts on it, expect draft rules, consultation papers, and pilot projects to follow.

Teams can start now: pilot detection tools, draft labeling policies, and train reviewers.