India's AI Rulebook in the Making: Sovereignty, Copyright and Data Protection

India is scaling AI with major funding and IndiaAI as laws on IP, data, and liability take form. Counsel: prioritize licensing, privacy, safety, disclosures, and filings.

Published on: Sep 13, 2025

India's legal framework for developing AI: what counsel needs to know now

India is scaling AI across industry while its rulebook takes shape. The government has earmarked about USD 11.7 billion for AI leadership, with the Ministry of Electronics & Information Technology (MeitY) steering policy and the IndiaAI Mission acting as the catalyst for compute access, data quality, talent, and public-private execution. IndiaAI signals a clear push for "AI sovereignty" without defaulting to foreign templates.

The result: opportunity with real regulatory exposure. Below is a concise brief for legal teams advising on AI development, training, and deployment in India.

Intellectual property: datasets, outputs, and fair dealing

The Copyright Act, 1957 governs training data use and ownership of outputs. Reproduction is the right of the copyright owner; copying in full for commercial use risks infringement. Courts weigh three factors case by case: quantum/value copied, purpose, and market competition.

Section 52(1)(a) recognizes "fair dealing" for private use (including research), criticism/review, and reporting of current events. Precedents such as RG Anand v. Delux Films (which anchored the idea-expression dichotomy) and the "Rameshwari Photocopy" case (which read educational use purposively) shape the analysis. That said, Indian "fair dealing" is narrower than US "fair use," and courts have not yet squarely applied it to AI training.

Open questions are before the Delhi High Court in ANI v. OpenAI: whether storing copyrighted data is infringement; whether model outputs are derivative works; whether "fair dealing" applies to training; and whether Indian courts have jurisdiction when servers are overseas. Expect a precedent that frames negotiations and risk allocation for training data use; commercial settlements are plausible until clarity lands.

  • Practical steps: license high-value datasets; log provenance; honor opt-outs; implement data minimization and deduplication; and maintain technical records of transformation.
  • Contract tips: clear IP warranties, indemnities for third-party claims, and carve-outs for training/benchmarking; output IP position (ownership vs. license) and derivative work disclaimers.
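The provenance logging suggested above can be sketched in code. This is a minimal illustration, assuming a simple JSON-based audit log; the field names and the example values are hypothetical, not mandated by the Copyright Act or any circular.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class DatasetRecord:
    """One entry in a training-data provenance log (illustrative schema)."""
    source_url: str        # where the data came from (placeholder below)
    license_id: str        # e.g. an SPDX licence identifier
    opt_out_honoured: bool # whether the publisher's opt-out signal was respected
    transformation: str    # cleaning/dedup steps applied before training
    ingested_on: str       # ISO date of ingestion

    def fingerprint(self) -> str:
        """Stable SHA-256 over the record, usable as an audit-trail key."""
        payload = json.dumps(asdict(self), sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()

record = DatasetRecord(
    source_url="https://example.com/corpus",  # hypothetical source
    license_id="CC-BY-4.0",
    opt_out_honoured=True,
    transformation="dedup + PII scrub",
    ingested_on=str(date(2025, 9, 1)),
)
print(record.fingerprint()[:12])  # short audit key for the log
```

Because the fingerprint is computed over a sorted serialization, any later change to the record produces a different key, which helps evidence the "technical records of transformation" if a dispute arises.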

Data and IT laws: consent, research exemption, and platform duties

Training may involve non-personal data, but scraping and product telemetry can capture personal data. The IT Act, 2000 and SPDI Rules, 2011 require express consent for sensitive personal data collection, processing, disclosure, and transfer. The Digital Personal Data Protection Act, 2023 (DPDPA) extends consent to all personal data with notice and user rights (access, correction, erasure, withdrawal).

DPDPA Section 17(2)(b) allows processing for research, archiving, or statistical purposes if no decision is taken specific to an individual and prescribed standards are met. AI training could fit, but only if future rules align and the process isolates individuals from decisions. Until then, secure a valid legal basis and strong de-identification.

Intermediary Guidelines (2021) impose due diligence on intermediaries, including AI companies, to curb infringing, obscene, or impersonation content. Safe harbour under the IT Act is not absolute, especially given deepfakes and misinformation. MeitY's 2024 advisory pushed bias controls and output labelling, but its withdrawal leaves implementation standards unclear.

  • Practical steps: consent flows for personal data, purpose limitation, data retention caps, and opt-out tooling; DRM-respecting crawlers; privacy-by-design with documented DPIAs.
  • Platform hygiene: notice-and-takedown, repeat-abuse controls, provenance/watermarking where feasible, and clear user disclosures on AI fallibility.
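The consent, purpose-limitation, and retention-cap controls above can be combined into a single gate before any processing step. A minimal sketch, assuming a DPDPA-style consent record; the retention figures and field names are internal-policy assumptions for illustration, not values from the Act or its rules.

```python
from datetime import date, timedelta

# Assumed internal retention caps per purpose, in days (not statutory figures).
RETENTION_DAYS = {"support": 180, "analytics": 365}

def may_process(record: dict, purpose: str, today: date) -> bool:
    """Allow processing only if consent covers the purpose, has not been
    withdrawn, and the retention cap for that purpose has not lapsed."""
    if record.get("withdrawn"):
        return False  # withdrawal of consent blocks further processing
    if purpose not in record.get("purposes", []):
        return False  # purpose limitation: only consented purposes
    cap = RETENTION_DAYS.get(purpose)
    if cap is None:
        return False  # unknown purpose -> fail closed
    return today <= record["collected_on"] + timedelta(days=cap)

consent = {"purposes": ["support"], "collected_on": date(2025, 1, 1), "withdrawn": False}
print(may_process(consent, "support", date(2025, 3, 1)))    # within cap and consented
print(may_process(consent, "analytics", date(2025, 3, 1)))  # purpose not consented
```

Failing closed on unknown purposes keeps the default aligned with purpose limitation rather than leaving gaps for unreviewed processing.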

Consumer protection: services and product liability

AI tools are likely "services" under the Consumer Protection Act, 2019. Product liability can attach for harm from faulty or biased algorithms, weak safety controls, or insecure software that leaks personal data. Expect stricter views for direct, high-risk use cases (healthcare, finance, employment, critical infrastructure).

  • Practical steps: safety-by-default settings, risk-tiered human oversight, incident response drills, and clear consumer disclosures on limitations, datasets, and known risks.

Sectoral rules: finance is moving first

Financial regulators have laid down specific transparency and accountability duties.

  • SEBI intermediaries (4 Jan 2019) and mutual fund ecosystem entities (9 May 2019): report AI system usage.
  • Mutual funds (27 Jun 2024): quarterly AI usage reporting with full disclosure.
  • Investment Advisers (regs: 16 Dec 2024; guidelines: 8 Jan 2025): disclose AI use in operations, regardless of scale.
  • Research Analysts (regs: 16 Dec 2024; guidelines: 8 Jan 2025): disclose AI tools; sole responsibility for client data security, confidentiality, integrity.
  • Intermediaries (regs: 10 Feb 2025): sole responsibility for data privacy/security, AI outputs, and legal compliance, irrespective of scale.
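Quarterly reporting obligations of the kind SEBI's 2024 mutual-fund circular contemplates lend themselves to a simple compliance calendar. A sketch, assuming Indian financial-year quarters; the 45-day filing window is an illustrative assumption, not a figure from any circular.

```python
from datetime import date, timedelta

def quarter_ends(fy_start_year: int) -> list[date]:
    """Quarter-end dates for an Indian financial year starting 1 April."""
    return [
        date(fy_start_year, 6, 30),
        date(fy_start_year, 9, 30),
        date(fy_start_year, 12, 31),
        date(fy_start_year + 1, 3, 31),
    ]

def filing_deadlines(fy_start_year: int, window_days: int = 45) -> list[date]:
    """Hypothetical filing deadlines: quarter end plus an assumed window."""
    return [q + timedelta(days=window_days) for q in quarter_ends(fy_start_year)]

for q, d in zip(quarter_ends(2025), filing_deadlines(2025)):
    print(f"quarter end {q} -> file by {d}")
```

Generating the calendar programmatically makes it easy to feed reminders into whatever ticketing or GRC tooling the compliance team already uses.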

RBI: FREE-AI principles and shared infrastructure

In August 2025, the RBI proposed a Framework for Responsible and Ethical Enablement of AI (FREE-AI) urging balanced legislation. The seven "sutras": trust, people first, innovation over restraint, fairness/equity, accountability, understandable by design, and safety/resilience/sustainability. Recommendations span infrastructure, capacity, policy, governance, protection, and assurance, including shared data/compute and an AI Innovation Sandbox. See RBI for updates.

DoT: fairness assessments

The Department of Telecommunications (DoT) issued a standard for Fairness Assessment and Rating of AI Systems in 2023. Expect growing demand for auditable fairness evaluations, metrics transparency, and independent testing in high-stakes deployments.

Your compliance playbook

  • Data mapping and provenance: inventory training, validation, and product data; track licenses, terms, and opt-outs; document transformations.
  • DPDPA readiness: consent, notices, user rights, purpose limitation, retention; de-identification that resists re-identification; cross-border transfer safeguards.
  • IP strategy: negotiate dataset licenses; implement filtering/deduplication; maintain training logs; set output ownership and derivative-work positions.
  • Safety and bias: pre-deployment testing, bias and performance documentation, model cards, content provenance markers, and human-in-the-loop for high risk.
  • Governance: an AI policy, risk classification, approvals, audit trails, red-teaming, and incident escalation. Assign a senior accountable owner.
  • Intermediary duties: notice-and-takedown, user reporting tools, and repeat-abuse controls; publish clear terms on AI usage and limitations.
  • Sectoral filings: build a calendar for SEBI disclosures; prepare evidence packs for regulators; align internal controls with RBI's FREE-AI principles.
  • Contracts: data protection addenda, security standards, audit rights, liability caps with specific carve-outs, and indemnities for IP and privacy claims.

What to watch

  • Delhi High Court's ruling in ANI v. OpenAI on training, storage, derivative outputs, and jurisdiction.
  • DPDPA rules for research/statistics processing and standards under Section 17(2)(b).
  • MeitY guidance on bias controls, labelling, and provenance requirements.
  • Sectoral expansion beyond finance: health, education, and employment regulators.

Bottom line

India is building AI capacity and guardrails in parallel. Until courts and ministries settle key questions, legal teams should rely on strong licensing, privacy compliance, safety testing, and transparent disclosures. Do the basics well, document your choices, and be ready to adapt.

Need to upskill cross-functional teams building AI under compliance constraints? Explore curated role-based programs at Complete AI Training.