Justice Nagarathna Seeks 24/7 Reporting and National Oversight to Curb AI-Driven Child Abuse

Justice B V Nagarathna urges a clear legal framework to curb AI-linked child abuse, warning of a "Sword of Damocles." Her proposals include a 24-hour CSAM reporting mandate, age checks, and compliance audits.

Published on: Oct 13, 2025

Supreme Court judge seeks legal framework to curb AI-linked child abuse

Justice B V Nagarathna of the Supreme Court called for a clear legal architecture to address deepfakes and AI-enabled child abuse. Speaking at the closing session of the national consultation on 'Safeguarding the Girl Child', organised by the Supreme Court's Juvenile Justice Committee with UNICEF India, she warned that unchecked technology risks loom like a "Sword of Damocles."

Her message was direct: treat AI-facilitated child harm as an urgent compliance and enforcement problem, not a future issue. The ask spans legislation, platform obligations, judicial capacity, and community safeguards.

Core legal proposals

  • 24-hour reporting mandate: Statutory obligation on platforms and intermediaries to report child sexual abuse material (CSAM) within 24 hours, with penalties for delay.
  • Age assurance at platform level: Practical age checks to reduce access and grooming risks; align identity, consent, and parental controls with privacy law.
  • National monitoring framework: Track takedown and response timelines across platforms and publish periodic compliance dashboards (a minimal sketch of such a dashboard metric follows this list).
  • AI Cybercrime Advisory Committee on Girl Child: A specialized body to study AI risks, advise on standards, and support coordinated enforcement.
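The monitoring framework is, at bottom, a timestamp problem: when was CSAM reported, and when did it come down. Below is a minimal sketch of how a compliance dashboard could score platforms against the proposed 24-hour window, assuming a hypothetical `CsamReport` record; the field names are illustrative, not drawn from any statute or platform API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical report-to-takedown record; nothing here reflects a real schema.
@dataclass
class CsamReport:
    platform: str
    reported_at: datetime
    taken_down_at: datetime | None  # None = content still live

SLA = timedelta(hours=24)  # the proposed 24-hour window

def compliance_rate(reports: list[CsamReport]) -> dict[str, float]:
    """Per-platform share of reports resolved within the 24-hour SLA."""
    tally: dict[str, list[int]] = {}  # platform -> [met, total]
    for r in reports:
        met = r.taken_down_at is not None and (r.taken_down_at - r.reported_at) <= SLA
        bucket = tally.setdefault(r.platform, [0, 0])
        bucket[0] += int(met)
        bucket[1] += 1
    return {p: met / total for p, (met, total) in tally.items()}
```

Published periodically, a table of these rates is what a "compliance dashboard" amounts to: one number per platform that regulators, courts, and the public can track over time.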

Operational implications for counsel

General counsel and compliance leads should map these proposals to existing duties under the IT Act and allied rules, the POCSO Act, and the Digital Personal Data Protection (DPDP) Act, 2023. Expect tighter safe-harbour scrutiny tied to proactive reporting, age assurance, and verified takedown service-level agreements (SLAs).

Companies offering generative features must treat synthetic CSAM and deepfakes as priority risks. This includes upstream model guardrails, content filters, and downstream reporting workflows that are auditable end-to-end.
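What "auditable end-to-end" could look like in code: layered checks before and after generation, with each decision appended to a tamper-evident log. This is a sketch under stated assumptions, not a mandated design; `flag_prompt` and `flag_output` are stubs standing in for real classifiers or hash-matching services, and the hash-chained JSONL log is one illustrative format.

```python
import hashlib
import json
from datetime import datetime, timezone

def flag_prompt(prompt: str) -> bool:
    """Stub: upstream check on user input before any generation runs."""
    return False  # stand-in for a trained classifier or policy engine

def flag_output(content: bytes) -> bool:
    """Stub: downstream check on generated media (e.g., hash matching)."""
    return False  # stand-in for a real detection service

def audit(log_path: str, event: dict) -> None:
    """Append a hash-chained JSON line: each record commits to the previous
    one, so silent edits to history break the chain."""
    prev = "genesis"
    try:
        with open(log_path) as f:
            *_, last = f  # raises ValueError if the file is empty
            prev = json.loads(last)["sha256"]
    except (FileNotFoundError, ValueError):
        pass
    event = {**event, "ts": datetime.now(timezone.utc).isoformat(), "prev": prev}
    digest = hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps({**event, "sha256": digest}, sort_keys=True) + "\n")

def generate_with_guardrails(prompt: str, model, log="audit.jsonl") -> bytes | None:
    """Prompt-level and output-level gates around a generative model."""
    if flag_prompt(prompt):
        audit(log, {"stage": "prompt", "action": "blocked"})
        return None  # refuse before anything is generated
    output = model(prompt)
    if flag_output(output):
        audit(log, {"stage": "output", "action": "blocked_and_escalated"})
        return None  # a real system would trigger the reporting workflow here
    audit(log, {"stage": "output", "action": "released"})
    return output
```

The chained digests are what turn logging into auditability: an external reviewer can verify the record was never rewritten after the fact.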

Judicial and enforcement readiness

  • Judicial training: Build capacity to handle AI-linked evidence, chain-of-custody for synthetic media, and expert testimony on model outputs.
  • Professionalise anti-trafficking investigations: Standardised SOPs, digital forensics support, survivor-sensitive procedures, and cross-border cooperation.
  • Stronger enforcement of sex selection laws: Prevent female foeticide and infanticide through consistent inspections, prosecutions, and data-led audits.

Public health and education measures

Justice Nagarathna urged nutrition awareness in schools, clear definitions of junk food categories, and a ban on unhealthy food marketing near schools. Education remains a lever to expand opportunities for girls and counter the belief that daughters are a burden.

Data, audits, and accountability

Accurate, disaggregated data is central to policy and enforcement. Regular reviews of sex ratios and public reporting on platform response times can turn intent into measurable progress.

Action checklist for legal teams

  • Update legal registers: Map obligations under the POCSO Act, Section 67B of the IT Act, the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, their interface with the DPDP Act, and child safety advisories.
  • Codify a 24-hour CSAM workflow: Define intake, triage, preservation, takedown, and reporting with named owners and on-call coverage (see the sketch after this checklist).
  • Age assurance program: Implement proportionate checks (risk-based), parental consent flows, and privacy-by-design assessments.
  • Model and filter controls: For AI features, enforce prompt-level, model-level, and output-level CSAM and grooming detection with periodic red-teaming.
  • Vendor contracts: Mandate CSAM detection, reporting SLAs, and audit rights for trust-and-safety and moderation vendors.
  • Transparency reporting: Publish response timelines and takedown statistics; prepare for a national monitoring regime.
  • Law enforcement interfaces: Maintain secure evidence preservation, lawful disclosure pipelines, and focal points for the National Cyber Crime Reporting Portal.
  • Train your teams: Litigation, investigations, and product counsel should be conversant with synthetic media forensics and AI risk controls.
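To make the 24-hour workflow item concrete, here is a minimal sketch of a case object that enforces stage order and flags SLA breaches. The stage names, `Case` fields, and the 24-hour deadline are illustrative assumptions; actual stages and owners would come from your own incident-response runbook.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum

class Stage(Enum):
    INTAKE = "intake"
    TRIAGE = "triage"
    PRESERVE = "preserve"   # evidence snapshot before takedown
    TAKEDOWN = "takedown"
    REPORT = "report"       # filing with the designated authority

ORDER = list(Stage)         # stages must advance strictly in this order
SLA = timedelta(hours=24)   # the proposed statutory window

@dataclass
class Case:
    case_id: str
    owner: str              # named on-call owner for this case
    opened_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    history: list[tuple[Stage, datetime]] = field(default_factory=list)

    def advance(self, stage: Stage) -> None:
        """Record completion of the next stage; out-of-order moves are errors."""
        if len(self.history) == len(ORDER):
            raise ValueError("case already complete")
        expected = ORDER[len(self.history)]
        if stage is not expected:
            raise ValueError(f"expected {expected.value}, got {stage.value}")
        self.history.append((stage, datetime.now(timezone.utc)))

    def breached(self) -> bool:
        """True if the case is still open past the 24-hour deadline."""
        complete = len(self.history) == len(ORDER)
        return not complete and datetime.now(timezone.utc) > self.opened_at + SLA
```

A scheduler that scans open cases for `breached()` and pages the owner supplies the on-call coverage the checklist calls for, and the same `history` timestamps can feed transparency reports and any national monitoring regime.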

Why this matters

The proposed framework links technology risk with concrete duties across law, platforms, and courts. It aims to reduce harm in hours, not weeks, and to make compliance measurable rather than performative.

The direction is clear: faster reporting, credible age checks, auditable timelines, and trained institutions. For statutory context, review the Protection of Children from Sexual Offences (POCSO) Act, 2012 alongside intermediary and data protection obligations.

