Modi calls AI a shared global good, unveils M.A.N.A.V. vision at India AI Impact Summit 2026

At the India AI Impact Summit 2026, Modi urged AI for all, especially the Global South, with humans staying in charge. New safety norms and 38,000 bookable GPUs give teams a real lift.

Published on: Feb 20, 2026

India AI Impact Summit 2026: Build systems that serve everyone

Prime Minister Narendra Modi set a clear north star at the India AI Impact Summit 2026: democratize AI and make it a tool for inclusion, especially for the Global South. His message to builders was simple: humans and intelligent systems will co-create, co-work, and co-evolve, but direction and control must stay in human hands.

AI can multiply human capability. That's the opportunity. The responsibility is to make it safe, shared, and useful in the real world, not just in labs.

The M.A.N.A.V. vision: a compass for builders

  • Moral and Ethical Systems: Align data, training, and deployment with clear guardrails and values.
  • Accountable Governance: Explainability, logs, approvals, and audits baked into pipelines.
  • National Sovereignty: Secure data residency, local compute, and compliance-first design.
  • Accessible and Inclusive: Multilingual UX, low-bandwidth modes, assistive features.
  • Valid and Legitimate: Strong evaluations, real-world testing, and transparent reporting.

Standards, trust, and safety: what changes for dev teams

Deepfakes and fabricated content are hitting open societies. Expect stronger norms around authenticity labels, watermarking, and source integrity. Provenance will move from "nice to have" to required infrastructure.

  • Adopt content provenance standards such as C2PA for images, video, and text outputs (a minimal authenticity-record sketch follows this list).
  • Ship authenticity labels in product surfaces; keep watermarking on by default for synthetic media.
  • Implement multilingual and cultural evaluations, not just English benchmarks.
  • Keep a human in the loop for high-stakes workflows; enforce approval gates and audit trails.
  • Use an AI risk framework (e.g., NIST AI RMF) to drive design reviews and continuous monitoring.
  • Treat child safety as a baseline requirement, as standard as a school syllabus: filters, age-aware experiences, and parental controls.
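To make provenance concrete, here is a minimal sketch of the kind of authenticity record a team could attach to every generated asset. It is illustrative only: the AuthenticityRecord dataclass and its field names are assumptions, not the C2PA manifest format, which a production pipeline would produce and sign with a C2PA-compatible toolchain.

```python
# Illustrative only: a hand-rolled provenance record, not the C2PA spec.
# A production pipeline would sign manifests with a C2PA-compatible toolchain.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuthenticityRecord:
    content_sha256: str   # hash of the media bytes, for tamper detection
    generator: str        # model or pipeline that produced the asset
    synthetic: bool       # drives the visible "AI-generated" label in the UI
    watermarked: bool     # whether an invisible watermark was embedded
    created_at: str       # ISO 8601 timestamp

def build_record(media_bytes: bytes, generator: str, watermarked: bool = True) -> AuthenticityRecord:
    """Attach a provenance record to a freshly generated asset."""
    return AuthenticityRecord(
        content_sha256=hashlib.sha256(media_bytes).hexdigest(),
        generator=generator,
        synthetic=True,
        watermarked=watermarked,
        created_at=datetime.now(timezone.utc).isoformat(),
    )

if __name__ == "__main__":
    record = build_record(b"<image bytes>", generator="in-house-t2i-v1")
    print(json.dumps(asdict(record), indent=2))
```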

Compute as a public good: 38,000 GPUs you can book

India's Common Compute Platform offers access to 38,000 GPUs for startups, researchers, academic institutions, and students, with 20,000 more coming. That lowers the barrier for training, fine-tuning, and large-scale evaluation.

  • Plan workloads with clear SLOs: fine-tune windows, evaluation runs, and inference SLAs.
  • Optimize costs: distillation, LoRA, 4/8-bit quantization, and mixture-of-experts where it makes sense (a LoRA-plus-4-bit sketch follows this list).
  • Push evaluations early: stress-test for Indic languages, low-resource scripts, and code-switching.
  • Build privacy into data pipelines: consent checks, PII scanning, and retention controls.
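A common way to stretch shared GPU hours is to load the base model in 4-bit and train small LoRA adapters on top. The sketch below uses the Hugging Face transformers, peft, and bitsandbytes libraries; the model ID and hyperparameters are placeholders, not recommendations.

```python
# Sketch: parameter-efficient fine-tuning with 4-bit weights + LoRA adapters.
# Assumes the transformers, peft, and bitsandbytes packages and a CUDA GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "your-org/your-base-model"  # placeholder, not a real checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)

lora_config = LoraConfig(
    r=16,                                   # adapter rank: the main cost/quality dial
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections; model-dependent
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # typically well under 1% of total weights
```

With adapters this small, checkpoints stay in the megabyte range, which also simplifies queueing and preemption on a shared cluster.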

Global signals from Macron and Guterres

France's President Emmanuel Macron highlighted AI as a core enabler for healthcare, energy, mobility, agriculture, and public services, citing India's massive digital infrastructure: digital ID for 1.4B people, 20B monthly payment transactions, and 500M digital health IDs.

UN Secretary-General António Guterres called for AI that "belongs to everyone," proposing a global fund to build foundational capabilities in developing countries. Translation: more inclusive compute, data, and talent pipelines are on the table.

New Delhi Frontier AI Impact Commitments: what to expect

  • Evidence-based policy from real usage: Anonymized, aggregated insights to study jobs, skills, and economic impact. Teams should prepare privacy-preserving telemetry, redaction, and k-anonymity practices (see the k-anonymity sketch after this list).
  • Stronger multilingual and contextual evaluations: Demonstrate performance across languages, cultures, and live use cases-especially for the Global South. Create evaluation suites with human raters, synthetic tests, and scenario-based probes.
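As a starting point for privacy-preserving telemetry, a k-anonymity check confirms that no combination of quasi-identifiers isolates fewer than k users before any dataset is shared. A minimal pandas sketch, with hypothetical column names:

```python
# Sketch: verify k-anonymity on aggregated usage telemetry before export.
# Column names (region, language, job_sector) are hypothetical quasi-identifiers.
import pandas as pd

def is_k_anonymous(df: pd.DataFrame, quasi_identifiers: list[str], k: int = 5) -> bool:
    """True if every combination of quasi-identifier values covers at least k rows."""
    group_sizes = df.groupby(quasi_identifiers).size()
    return bool(group_sizes.min() >= k)

def suppress_small_groups(df: pd.DataFrame, quasi_identifiers: list[str], k: int = 5) -> pd.DataFrame:
    """Drop rows belonging to groups smaller than k (suppression, the simplest remedy)."""
    return df.groupby(quasi_identifiers).filter(lambda g: len(g) >= k)

if __name__ == "__main__":
    telemetry = pd.DataFrame({
        "region": ["north", "north", "south", "south", "south"],
        "language": ["hi", "hi", "ta", "ta", "ta"],
        "job_sector": ["it", "it", "it", "it", "it"],
        "sessions": [3, 5, 2, 7, 4],
    })
    qi = ["region", "language", "job_sector"]
    print(is_k_anonymous(telemetry, qi, k=2))         # True: each group has >= 2 rows
    print(suppress_small_groups(telemetry, qi, k=3))  # keeps only the 3-row south/ta group
```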

What this means for IT and development teams

  • Ship content provenance: watermarking, signature chains, and visible labels for generated media.
  • Own your evals: create internal leaderboards for Indic languages and critical tasks (support, forms, governance).
  • Guardrails by default: toxicity, bias, and child-safety filters at prompt, model, and output layers.
  • Human oversight: routing for expert review on sensitive actions (financial, legal, medical, civic); see the routing sketch after this list.
  • Data lifecycle discipline: consented datasets, retention limits, reproducible training manifests.
  • Compute orchestration: queueing, preemption, and priority plans for shared GPU access.
  • Local-first options: on-device or edge inference where privacy or latency demands it.
  • Open code and shared development: contribute tools, eval sets, and safety modules to speed up quality and trust.
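One way to wire human oversight into a product is a routing layer that auto-approves low-stakes actions, holds sensitive ones for expert sign-off, and logs every decision for the audit trail. The categories and in-memory queue below are illustrative; a production system would persist the log and back the queue with a real workflow tool.

```python
# Sketch: route sensitive AI-proposed actions to human review before execution.
# Categories and the in-memory queue are illustrative, not a specific product's API.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("oversight")

SENSITIVE_CATEGORIES = {"financial", "legal", "medical", "civic"}

@dataclass
class Action:
    category: str
    payload: dict
    approved: bool = False

@dataclass
class ReviewQueue:
    pending: list[Action] = field(default_factory=list)

    def submit(self, action: Action) -> str:
        """Auto-approve low-stakes actions; hold sensitive ones for an expert."""
        if action.category in SENSITIVE_CATEGORIES:
            self.pending.append(action)
            log.info("held for review: %s", action.category)  # audit trail entry
            return "pending_review"
        action.approved = True
        log.info("auto-approved: %s", action.category)
        return "approved"

    def approve(self, index: int, reviewer: str) -> Action:
        """An expert signs off; the approval is logged with the reviewer's identity."""
        action = self.pending.pop(index)
        action.approved = True
        log.info("approved by %s: %s", reviewer, action.category)
        return action

if __name__ == "__main__":
    queue = ReviewQueue()
    print(queue.submit(Action("support", {"reply": "draft answer"})))  # approved
    print(queue.submit(Action("financial", {"refund": 4200})))         # pending_review
    queue.approve(0, reviewer="ops-lead")
```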

India's builder advantage

Semiconductors, secure data centers, a strong IT backbone, and a fast-moving startup network make India a natural hub for affordable, scalable, and secure AI systems. If a model works well across India's diversity-language, context, and connectivity-it's more likely to generalize globally.

The invitation is clear: "Design and Develop in India, Deliver to the World, Deliver to Humanity." Start with practical pilots, publish your evaluations, and keep people in command of the system.
