India charts its own course on AI regulation
India has released a new set of AI guidelines that lean on existing laws instead of building a heavy, separate framework. The goal: move fast, keep people safe, and let local builders keep shipping. A national AI summit is slated for early 2026, a signal that policymaking is set to accelerate.
The guidelines point to the Information Technology (IT) Act and the Digital Personal Data Protection (DPDP) Act as the core guardrails for risks like deepfakes, unauthorized data use, and misuse at scale. The headline: govern how AI is applied, not the technology itself.
What the guidelines say
- Use existing laws. The IT Act and DPDP Act form the enforcement backbone; there is no standalone AI law.
- "Do No Harm." A principle-based approach encourages responsible deployment without freezing progress.
- Content authentication. Platforms must visibly label AI-generated or synthetically modified content. Visual labels should cover at least 10% of the display area; audio labels should be audible for at least 10% of the duration (a worked sketch of these thresholds follows this list).
- Voluntary safeguards. Preference for self-regulation and flexible measures over rigid, upfront compliance.
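For teams sizing up what those 10% thresholds mean in practice, here is a minimal sketch in Python. The helper names and the ceiling-division choice are assumptions for illustration; the guidelines specify thresholds, not an implementation.

```python
# Minimal sketch of the 10% labeling thresholds described above.
# Helper names are hypothetical; the guidelines define thresholds,
# not any particular implementation.

def min_visual_label_area(width_px: int, height_px: int) -> int:
    """Smallest label area (px) covering at least 10% of the display."""
    return (width_px * height_px + 9) // 10  # ceil(total_pixels / 10)

def min_audio_label_seconds(duration_s: float) -> float:
    """Shortest audible disclosure covering at least 10% of the clip."""
    return duration_s * 0.10

if __name__ == "__main__":
    # Example: a 1920x1080 frame and a 60-second audio clip.
    area = min_visual_label_area(1920, 1080)  # 207,360 of 2,073,600 px
    print(f"Visual label must cover >= {area} px")
    print(f"Audio label must run >= {min_audio_label_seconds(60.0):.1f} s")
```

In product terms, that is roughly a 1920x108 banner strip on a full-HD frame, or six seconds of audible disclosure on a one-minute clip.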
As one of the guideline authors notes, this approach aims to balance innovation and safety while avoiding blanket, prescriptive rules.
Adoption is already high
India's workforce is moving fast. A recent Boston Consulting Group report estimates that 92% of employees in customer service, operations, and production roles in India already use AI at work, well above the global average of 72%.
From health care and agriculture to fintech and public services, AI is now part of day-to-day execution. Leaders in the ecosystem argue that we are still early, more "dial-up era" than maturity, so education, awareness, and upskilling matter as much as guardrails.
How India's model differs
Compared to the EU's mandatory, risk-classified approach, India favors flexibility, speed, and context. China leans on technology-specific controls, while the US remains a patchwork of state and federal efforts. India's focus stays on governing use cases, not technologies.
One founder framed it simply: the EU is like airport security, mandatory and slow; India is closer to metro security, quick, situational, and built for scale. In practice, a fintech in Bengaluru can ship an underwriting model faster under voluntary safeguards than under strict pre-market classifications. The same logic lets edtechs run limited pilots without months of paperwork.
Gaps and risks to watch
Critics say the guidelines are light on implementation details. How will data usability and sharing improve, especially with the long-standing problem of poor public-sector data quality?
There's also a narrow view of risk in places. Labor displacement, psychological harms, environmental costs, and market concentration get little attention. These are live issues that need a plan, not just principles.
The legal question
Another concern: the guidelines are not binding. There are no legal consequences for ignoring them. That weakens accountability in high-stakes deployments.
Some legal experts argue for a dedicated AI law to address liability, transparency, accountability, and the status of AI systems in legal processes, including whether certain AI agents should have limited, conditional recognition to clarify responsibility.
What public leaders and teams can do now
- Map your AI use cases and risks. Classify by impact on citizens, critical services, and rights, and prioritize oversight for high-impact systems (see the sketch after this list).
- Adopt voluntary safeguards. Human-in-the-loop for sensitive decisions, bias testing, security reviews, and audit trails.
- Prepare for content labeling. Build workflows to detect, label, and log synthetic media, and bake the 10% visual and audio thresholds into product and comms standards.
- Fix data quality at the source. Set minimum data standards, documentation, and access controls for public datasets. Budget for cleanup, not just new models.
- Upskill your workforce. Train teams on safe prompts, verification, privacy, and model limitations. Focus on managers and service delivery staff.
- Create procurement guardrails. Require vendors to disclose training data sources, known limitations, evaluation results, and incident response plans.
- Run time-boxed sandboxes. Pilot critical AI services with clear success metrics and red lines. Publish what you learn.
- Set citizen-facing policies. Plain-language notices, opt-outs where feasible, and clear grievance redressal channels.
- Measure outcomes, not hype. Track accuracy, fairness, service uptime, user complaints, and cost-to-serve. Review quarterly.
- Coordinate early. Work with sector regulators, CERT-In, and data protection bodies before large-scale rollouts.
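As a starting point for the use-case mapping in the first item above, here is a minimal sketch of an impact-tier classifier. The tiers and criteria are illustrative assumptions, not definitions from the guidelines.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    affects_citizens: bool   # makes or shapes decisions about individuals
    critical_service: bool   # health, finance, benefits, public safety
    touches_rights: bool     # eligibility, liberty, speech, privacy

def impact_tier(uc: UseCase) -> str:
    """Assign an oversight tier; higher tiers warrant human review and audits."""
    score = sum([uc.affects_citizens, uc.critical_service, uc.touches_rights])
    if score >= 2:
        return "high"    # human-in-the-loop, bias testing, audit trails
    if score == 1:
        return "medium"  # periodic review and logging
    return "low"         # standard monitoring

for uc in [
    UseCase("loan underwriting model", True, True, True),
    UseCase("grievance-ticket triage", True, False, False),
    UseCase("internal document search", False, False, False),
]:
    print(f"{uc.name}: {impact_tier(uc)}")
```

Even a rough tiering like this makes the rest of the checklist actionable: high-tier systems get the human-in-the-loop reviews, audit trails, and sandbox pilots first.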
Resources
- Digital Personal Data Protection Act (Official Gazette)
- EU AI Act overview (European Parliament)
- Practical AI upskilling by job role (Complete AI Training)
Bottom line
India is betting on speed and accountability through existing laws, with principles that encourage responsible use. That can work, provided agencies tighten execution, improve data quality, and give the guidelines teeth where it counts. The next year will be about proof, not promises.