India's PSA Proposes National AI Risk Registry And A Techno-Legal Playbook
India's Office of the Principal Scientific Advisor (PSA) has proposed a national database to log, classify, and analyse AI risks and incidents. The goal is simple: track real harm, fix weak points, and keep innovation moving without blind spots. For researchers and labs, this is a signal to operationalise governance, not just discuss it.
- Central registry for safety failures, bias, security breaches, and misuse
- Techno-legal framework: legal safeguards + technical controls + clear institutions
- No standalone AI law for now; sectoral rules and targeted amendments instead
- Stronger post-deployment monitoring, audits, disclosures, and human oversight
What The National AI Risk Registry Would Do
The proposed registry would capture incidents from public bodies, private companies, researchers, and civil society. It focuses on safety failures, biased outcomes, security lapses, and misuse.
Beyond reporting, it's built for learning. It would enable an India-specific risk taxonomy, detect systemic trends and emerging threats, support data-driven audits, and refine both technical and legal controls over time.
Techno-Legal Framework: Encode Duties Into Systems
The PSA recommends moving away from command-and-control enforcement. Instead, encode legal duties into system design: guardrails, logs, attestations, audit trails, and kill switches where needed.
Core anchors include privacy, security, safety, and fairness, alongside transparency, accountability, explainability, and provability. Expect lifecycle controls spanning data collection, training, deployment, monitoring, and decommissioning.
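What "encoding duties into systems" can look like in practice: the sketch below is a minimal illustration, not anything the report prescribes. It chains hashed audit records around inference calls, tags them with a lifecycle stage, and gates execution on a kill-switch flag. All names and fields here are assumptions.

```python
import hashlib
import json
import time

# Hypothetical flag; in a real system this would be an externally controlled, auditable switch.
KILL_SWITCH = False

def audit_record(stage: str, event: dict, prev_hash: str) -> dict:
    """Build an append-only audit entry that chains the hash of the previous entry."""
    body = {"ts": time.time(), "stage": stage, "event": event, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def guarded_predict(model, inputs, log: list):
    """Run inference only if the kill switch is off, and append an audit entry."""
    if KILL_SWITCH:
        raise RuntimeError("Inference disabled by kill switch")
    output = model(inputs)
    prev = log[-1]["hash"] if log else ""
    log.append(audit_record("deployment", {"n_inputs": len(inputs)}, prev))
    return output

# Illustrative usage with a dummy model
log = []
result = guarded_predict(lambda xs: [x * 2 for x in xs], [1, 2, 3], log)
```

The hash chaining is what makes the trail tamper-evident: edit one record and every later hash stops matching.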
Institutions To Coordinate And Scale Oversight
An AI Governance Group (AIGG), chaired by the PSA, is proposed to align ministries and regulators. Its focus: responsible innovation, sector deployments with clear guardrails, tracking emerging risks, and recommending legal changes.
A Tech and Policy Expert Committee (TPEC) under MeitY would provide multidisciplinary depth across law, policy, ML, AI safety, and cybersecurity. An AI Safety Institute would evaluate high-risk systems, build safety tooling, drive capacity building and training, and engage globally.
No Standalone AI Law (For Now)
The report advises against a single AI law at this stage. Instead, close gaps through sectoral guidelines and targeted amendments. This keeps policy flexible while risks and use cases evolve.
Enforcement should scale through standardised and automated checks. India's digital public infrastructure (DPI) can help reduce the compliance burden, especially for smaller ventures.
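What a standardised, automated check might look like in practice: a small CI-style gate that blocks deployment if a model card is missing required fields. The field list and file format here are illustrative assumptions, not a prescribed schema.

```python
import json
import sys

# Illustrative required fields; a real list would come from sectoral guidance.
REQUIRED_FIELDS = ["intended_use", "training_data_summary", "evaluation", "known_limitations", "human_oversight"]

def check_model_card(path: str) -> list[str]:
    """Return the required fields that are missing or empty in a model-card JSON file."""
    with open(path) as f:
        card = json.load(f)
    return [name for name in REQUIRED_FIELDS if not card.get(name)]

if __name__ == "__main__":
    missing = check_model_card(sys.argv[1])
    if missing:
        print(f"Model card incomplete, missing: {', '.join(missing)}")
        sys.exit(1)  # non-zero exit fails the pipeline and blocks deployment
    print("Model card passes the basic completeness gate")
```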
Deepfakes: Treat As A Systemic Risk
The PSA calls for content provenance measures: mandatory disclosure, persistent identifiers, and cryptographic metadata at generation and distribution. Platforms should maintain usage logging, detect repeat offenders, and coordinate incident reporting.
This is less about takedowns after the fact and more about traceability at the point of creation.
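To make the traceability point concrete, here is a rough sketch of cryptographic metadata attached at generation time: a persistent identifier plus a signed content hash that platforms and distributors can verify later. It uses Python's standard hmac module with a shared key purely for illustration; a production system would more likely use asymmetric signatures and an emerging standard such as C2PA.

```python
import hashlib
import hmac
import json
import uuid

SIGNING_KEY = b"replace-with-managed-key"  # illustrative; key management is the hard part

def attach_provenance(content: bytes, generator_id: str) -> dict:
    """Produce provenance metadata for a generated artefact: persistent ID, content hash, signature."""
    record = {
        "asset_id": str(uuid.uuid4()),    # persistent identifier
        "generator": generator_id,        # who or what produced the content
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the content matches the recorded hash and the signature is intact."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())
```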
Local Evaluation Over Imported Benchmarks
The report warns against relying on Western benchmarks that miss Indian languages, accents, and skin tones. India-specific evaluation is necessary for both accuracy and fairness.
For research teams, that means curating local datasets, stress-testing across dialects and contexts, and reporting limitations transparently.
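One way to make that reporting concrete is to break every metric out by language, dialect, or demographic group instead of publishing a single pooled score. A minimal sketch, with placeholder data and a dummy classifier:

```python
from collections import defaultdict

def accuracy_by_group(examples, predict, group_key="language"):
    """Compute accuracy separately for each group (e.g. language or dialect) rather than one pooled number."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex in examples:
        group = ex[group_key]
        total[group] += 1
        if predict(ex["text"]) == ex["label"]:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Illustrative usage with placeholder data and a trivial classifier
examples = [
    {"text": "...", "label": "positive", "language": "hi"},
    {"text": "...", "label": "negative", "language": "ta"},
    {"text": "...", "label": "positive", "language": "bn"},
]
report = accuracy_by_group(examples, predict=lambda text: "positive")
print(report)  # e.g. {'hi': 1.0, 'ta': 0.0, 'bn': 1.0}
```

A per-group table like this also makes the "reporting limitations transparently" step easy: the weakest cells are the limitations.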
What Labs, Institutes, And R&D Teams Should Do Now
- Stand up an internal incident reporting pipeline that maps cleanly to a national registry (fields, severity, remediation notes); a schema sketch follows at the end of this list.
- Adopt a risk taxonomy aligned to India's contexts; don't copy-paste generic categories.
- Instrument systems for accountability: immutable logs, model cards, data lineage, attestations, and scheduled audits.
- Implement lifecycle controls: dataset governance, pre-deployment testing, post-deployment monitoring, and kill switches for agentic features.
- Build provenance into generative workflows: watermarking or cryptographic metadata, and clear user disclosures.
- Localise evaluation: multilingual test sets, accent robustness, demographic fairness checks, and bias impact reports.
- Prepare for human oversight and grievance redressal flows; document escalation playbooks.
- Leverage DPI where possible to cut compliance costs and standardise attestations.
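On the incident pipeline item above, an internal record can be shaped so it exports cleanly to whatever schema a national registry eventually adopts. The fields and categories below are assumptions drawn from the harms the report names (safety failures, biased outcomes, security lapses, misuse), not a published registry format.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from enum import Enum

class IncidentCategory(str, Enum):
    # Categories mirroring the harms the PSA report calls out
    SAFETY_FAILURE = "safety_failure"
    BIASED_OUTCOME = "biased_outcome"
    SECURITY_LAPSE = "security_lapse"
    MISUSE = "misuse"

@dataclass
class AIIncident:
    system_name: str
    category: IncidentCategory
    severity: int                      # e.g. 1 (low) to 5 (critical); the scale is an internal choice
    description: str
    affected_users: int | None = None
    remediation_notes: str = ""
    reported_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_registry_payload(self) -> dict:
        """Flatten to a plain dict, ready to map onto a future national registry schema."""
        payload = asdict(self)
        payload["category"] = self.category.value
        return payload

# Illustrative usage
incident = AIIncident(
    system_name="loan-eligibility-model",
    category=IncidentCategory.BIASED_OUTCOME,
    severity=3,
    description="Approval rates diverged across language groups in a regional rollout",
    remediation_notes="Rolled back to previous model; retraining with rebalanced data",
)
print(incident.to_registry_payload())
```

Keeping severity scales and remediation notes structured from day one makes any later mapping to a national schema a mechanical exercise rather than an archaeology project.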
How This Aligns With Global Best Practice
The PSA recommends adapting global frameworks to India's needs rather than copying them wholesale. That's sensible, and actionable.
- Risk management: See the NIST AI Risk Management Framework for practical controls and assurance strategies.
- Principles and governance: The OECD AI Principles provide a policy baseline many regulators reference.
Why This Matters Now
The proposals land ahead of the India AI Impact Summit 2026 in New Delhi, expected to draw leaders from Nvidia, OpenAI, Google, and Anthropic. High-level interest won't mean much without clear operating rules.
A registry plus techno-legal controls gives the research community a common language with policymakers, so evidence drives decisions rather than opinions.
Practical Next Step
If you're leading an AI research programme, run a gap assessment against the items above and identify what you can implement in the next 30, 60, and 90 days. Start with logging, disclosures, and localised evaluation; they pay off immediately and reduce downstream risk.
For teams building skills and internal capability, here's a curated starting point: Latest AI courses at Complete AI Training.
Bottom line: Treat governance as part of engineering. If it's not encoded into your systems, it won't hold under pressure.