India to prioritise AI innovation, regulate only when needed: IT Secretary
India will push hard on AI innovation first and legislate only if the need is clear, said S. Krishnan, Secretary at the Ministry of Electronics and Information Technology (MeitY). His message: build momentum, keep guardrails ready, and act with precision when risks surface.
"If we believe that the priority needs to be for innovation, regulation is not the priority today," he said. "Having said that, if the need arises for legislation or regulation, the government will not be found wanting."
Governance blueprint submitted to MeitY
A subcommittee under the IndiaAI Mission has submitted governance guidelines for AI companies operating in India. The report distils eight principles that public and private builders can apply now:
- Transparency: Share meaningful information on how systems are developed, their capabilities, and their limits.
- Accountability: Clarify who is responsible for outcomes and for respecting user rights and the rule of law.
- Safety: Assess and mitigate risks across the AI lifecycle.
- Privacy: Protect personal data by default and by design.
- Fairness: Check for bias and ensure equitable treatment across user groups.
- Human-centred values: Keep human judgment in the loop wherever appropriate.
- Inclusive innovation: Enable access and benefits across regions and communities.
- Digital by design: Bake governance and auditability into systems from day one.
The committee recommends that developers provide clear user-facing documentation, not marketing copy. It also calls for mechanisms to assign and trace accountability across developers, deployers, and operators.
Human oversight is non-negotiable. The report advises meaningful human review, intervention paths, and checks to avoid blind reliance on automated outputs.
Deepfakes: traceability over blanket bans
While the government has issued advisories to curb deepfakes, the subcommittee notes that India already has legal tools to address malicious synthetic media. What's missing is traceability that works at scale.
The report proposes assigning unique, immutable identities to content creators, publishers, and platforms. These identities can be used to watermark inputs and outputs of generative tools, track a deepfake's lifecycle, and verify consent or violations.
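To make the traceability idea concrete, here is a minimal sketch in Python (standard library only) of the pattern the report describes: a unique, immutable identity per creator, a signed provenance record attached to each generated output, and a verification step that detects tampering. The names (`ProvenanceRegistry`, `stamp`, `verify`) and the HMAC-based signing are illustrative assumptions, not any official MeitY or IndiaAI specification; a production system would use public-key signatures and robust media watermarking rather than a metadata record.

```python
import hashlib
import hmac
import json
import uuid
from datetime import datetime, timezone

class ProvenanceRegistry:
    """Illustrative registry: issues immutable creator IDs and signs content records."""

    def __init__(self):
        self._keys = {}  # creator_id -> secret signing key (hypothetical storage)

    def register_creator(self) -> str:
        """Issue a unique, immutable identity with its own signing key."""
        creator_id = str(uuid.uuid4())
        self._keys[creator_id] = uuid.uuid4().bytes
        return creator_id

    def stamp(self, creator_id: str, content: bytes) -> dict:
        """Attach a signed provenance record to a piece of generated content."""
        record = {
            "creator_id": creator_id,
            "content_sha256": hashlib.sha256(content).hexdigest(),
            "created_at": datetime.now(timezone.utc).isoformat(),
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(
            self._keys[creator_id], payload, hashlib.sha256
        ).hexdigest()
        return record

    def verify(self, record: dict, content: bytes) -> bool:
        """Check the signature and that the content hash still matches."""
        key = self._keys.get(record["creator_id"])
        if key is None:
            return False
        unsigned = {k: v for k, v in record.items() if k != "signature"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        return (
            hmac.compare_digest(expected, record["signature"])
            and record["content_sha256"] == hashlib.sha256(content).hexdigest()
        )

registry = ProvenanceRegistry()
creator = registry.register_creator()
output = b"frame bytes from a generative model"
record = registry.stamp(creator, output)
assert registry.verify(record, output)          # authentic, untampered
assert not registry.verify(record, b"edited")   # tampering is detectable
```

Chaining such records across creator, publisher, and platform is what would let an investigator walk a deepfake's lifecycle back to its origin and check whether consent was recorded at each hop.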
What ministries, departments, and PSUs can do now
- Run pilots with human-in-the-loop review for any AI touching citizen services or internal decision support.
- Create a one-page "model card" for each system: purpose, data sources, limitations, failure modes, and escalation contacts (a minimal sketch follows this list).
- Set an accountability chain across vendor, integrator, and business owner. Document who signs off at each stage.
- Adopt risk tiers: low, medium, high. Apply stricter testing, audit, and red-teaming for medium/high tiers.
- Include transparency, audit logs, and data protection clauses in procurement contracts. Ask vendors for bias and security test results.
- Prepare a playbook for deepfake incidents: detection, takedown, communication, and evidence preservation.
- Use watermarking and content provenance tools where feasible, especially for official multimedia outputs.
- Train staff handling AI-assisted tasks on oversight, data hygiene, and user consent.
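As promised above, here is a minimal sketch of the one-page model card as a Python dataclass a department could fill in and publish alongside each system. The fields mirror the checklist items, including the risk tier; the `ModelCard` name and all example values are hypothetical, not a prescribed government format.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """One-page record of what a deployed AI system does and who owns it."""
    name: str
    purpose: str
    risk_tier: str                  # "low" | "medium" | "high"
    data_sources: list[str]
    limitations: list[str]
    failure_modes: list[str]
    escalation_contacts: list[str]
    human_in_the_loop: bool = True  # default to meaningful human review

# Hypothetical example for a citizen-facing decision-support pilot.
card = ModelCard(
    name="grievance-triage-assistant",
    purpose="Suggest a category and priority for incoming citizen grievances.",
    risk_tier="medium",
    data_sources=["Historical grievance records, 2019-2024 (anonymised)"],
    limitations=["English and Hindi text only", "No legal interpretation"],
    failure_modes=["Mislabels mixed-language complaints"],
    escalation_contacts=["ai-governance@dept.example.gov.in"],
)
print(card)
```

Keeping the card in a structured form rather than free text makes it easy to audit that every deployed system has a named owner, a declared risk tier, and an escalation path.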
Who is steering the guidance
The subcommittee is chaired by Professor B. Ravindran of the Indian Institute of Technology Madras. Members include IndiaAI Mission CEO Abhishek Singh, Debjani Ghosh, advocate Rahul Matthan, and Sharad Sharma of iSPIRT, among others.
According to Krishnan, the government's stance remains human-centric: enable builders, protect citizens, and step in with legislation only when clearly required.
Get the source guidance and next steps
For ongoing updates and reference material, see the MeitY website and the IndiaAI portal.
If your department is planning structured AI upskilling for teams working on procurement, governance, or analytics, explore curated options by role here: Complete AI Training - Courses by Job.