India's AI Safeguards: A Legal Briefing for Counsel
India is pushing AI adoption while building guardrails, especially for children. In Parliament, Union Minister for Electronics and IT Ashwini Vaishnaw outlined the legal and regulatory measures now in force across content moderation, data protection, governance, awareness, and cybercrime response.
If your organization builds, deploys, or integrates AI systems, or runs an online platform, these are the operational requirements and legal touchpoints to track.
Intermediary obligations: time-bound removal and reporting
- Under the Information Technology Act, 2000 and rules, intermediaries must prevent hosting or sharing harmful child-related content (e.g., sexually explicit material or content inciting violence).
- Takedown timelines: remove unlawful content within 3 hours of a government or court notice; remove non-consensual sexual or intimate content within 2 hours.
- Mandatory reporting to authorities under statutes including the Protection of Children from Sexual Offences Act, 2012 and the Bharatiya Nagarik Suraksha Sanhita, 2023.
Legal takeaway: build verified escalation paths, 24/7 monitoring capacity, and documented SOPs that meet the 2-3 hour removal windows. Ensure incident logs and evidence handling align with reporting duties.
Data protection: children-first duties under DPDP and IT Rules
- The Digital Personal Data Protection Act, 2023 applies to personal data gathered via AI-enabled devices and toys.
- Verifiable parental consent is required before processing children's personal data.
- Prohibitions: no behavioural monitoring, tracking, or targeted advertising directed at children.
- Operational mechanisms include identity and age verification measures and virtual tokens to enforce consent and access control.
- IT (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011: collect data only for specified purposes and obtain consent before sharing.
Legal takeaway: default to no tracking for minors, embed age assurance in onboarding, and audit data flows for purpose limitation and sharing controls.
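The "default to no tracking" posture can be expressed as a fail-closed policy check. This is a sketch under stated assumptions: the profile fields and function names are hypothetical, and real age assurance would rely on the verification mechanisms the rules contemplate, not a boolean flag.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    age_verified: bool        # age assurance completed at onboarding
    is_minor: bool            # outcome of age verification
    parental_consent: bool = False  # verifiable parental consent on file

def tracking_allowed(user: UserProfile) -> bool:
    # Behavioural monitoring, tracking, and targeted ads are prohibited
    # for children; an unverified age is treated as a minor (fail closed).
    if not user.age_verified or user.is_minor:
        return False
    return True

def processing_allowed(user: UserProfile) -> bool:
    # Processing a child's personal data requires verifiable parental
    # consent under the DPDP Act.
    if user.is_minor:
        return user.parental_consent
    return user.age_verified
```

Keeping the check fail-closed means a gap in age assurance degrades to the more protective default rather than to tracking.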
Responsible AI development: governance expectations
The India AI Governance Guidelines emphasize human-centric, responsible AI. Children are treated as a vulnerable group, with recommended risk assessment frameworks to monitor potential harms from AI systems.
Legal takeaway: integrate child-safety risk assessment into model development, testing, and deployment. Maintain review cycles for new features that could affect minors.
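One way to operationalize the review cycle is a release gate that blocks features affecting minors until a child-safety assessment is recorded. The checklist items below are assumptions modelled on the governance guidance, not an official list from the India AI Governance Guidelines.

```python
# Illustrative child-safety assessment items (hypothetical).
CHILD_SAFETY_CHECKS = [
    "age_assurance_reviewed",
    "content_exposure_assessed",
    "data_minimisation_confirmed",
    "escalation_path_documented",
]

def release_gate(completed: set[str], affects_minors: bool) -> bool:
    # Features that do not affect minors pass through; features that do
    # must complete every child-safety check before shipping.
    if not affects_minors:
        return True
    return all(item in completed for item in CHILD_SAFETY_CHECKS)
```

Wiring this into CI or a release checklist gives the "review cycles for new features" a concrete enforcement point.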
Cyber safety awareness: national programs you can leverage
CERT-In publishes safety tips, posters, infographics, and videos to build cyber hygiene, useful for staff, parents, and young users. Through the ISEA programme, more than 4,300 workshops have trained 9.63 lakh participants and 1.13 lakh master trainers, with indirect outreach to about 15 crore beneficiaries.
Legal takeaway: fold CERT-In resources into compliance training and employee handbooks. Keep records of training completion for audit trails.
Cybercrime response and coordination: reporting and blocking
- The National Cyber Crime Reporting Portal enables reporting of cyber offences, with a focus on crimes against children.
- The Indian Cyber Crime Coordination Centre (I4C) coordinates national action, including online child exploitation cases.
- Authorities block websites hosting child sexual abuse material based on Interpol inputs via the CBI.
- ISPs are directed to block CSAM using global databases such as the Internet Watch Foundation (UK) and Project Arachnid (Canada).
- An MoU between the National Crime Records Bureau and the US National Center for Missing and Exploited Children supports sharing tipline reports for prompt action.
- The National Commission for Protection of Child Rights provides studies, guidelines, and "Being Safe Online" resources for children, parents, and educators.
Legal takeaway: ensure your incident response plan includes reporting routes, evidence preservation, and cooperation protocols with law enforcement and designated portals.
Action checklist for in-house legal and compliance
- Codify 2-3 hour takedown SOPs for flagged child-related content; maintain audit-ready logs.
- Set up mandatory reporting workflows referencing POCSO and BNSS obligations.
- Implement age assurance and verifiable parental consent for all child-facing features and AI-powered devices or toys.
- Disable behaviour tracking and targeted ads for minors by default; document technical controls and exceptions.
- Map data processing under IT Rules 2011: purpose limitation, consent for sharing, and security practices.
- Adopt an AI risk assessment process that explicitly evaluates child safety impacts and monitoring.
- Embed CERT-In/ISEA materials into ongoing training and keep completion records.
- Prepare law enforcement cooperation guidelines, including use of the national reporting portal and preservation orders.
- Review vendor and platform contracts to reflect intermediary duties, takedown timelines, and child-safety clauses.