Inside New Delhi's AI Summit: Impact Over Hype and a Louder Global South

At India's AI Impact Summit, leaders pushed evidence, Global South voices, and local-language access over flashy demos. The question lingered: who sets the pace, and to what ends?

Published on: Feb 25, 2026

India's AI Impact Summit: Who gets to set the direction?

New Delhi's streets were draped in optimism, but inside Bharat Mandapam the tone was sober. The core issue was clear: as AI remakes development, who decides the pace and priorities? "I don't see why the technology can't slow down," said Rachel Adams of the Global Centre on AI Governance. The host nation pushed a simple idea all week: listen to the Global South and turn AI into real outcomes in agriculture, health, and education.

Governance and safety over hype

Panels focused less on demos and more on rules, incentives, and proof of value. "How do we work out what's the good AI and create the incentives, create the funding, create the policy frameworks that are going to push AI down that path?" asked Claire Melamed of the U.N. Foundation. The message: progress with evidence, not press releases.

Big names, bigger frictions

Leaders from tech and government, including Sundar Pichai, Emmanuel Macron, and Jimmy Wales, took the stage. Bill Gates was a high-profile no-show. A failed photo-op handshake between Sam Altman and Dario Amodei grabbed headlines, as did an Indian university's attempt to pass off a Chinese robot as its own.

Logistics struggled under an estimated 50,000 attendees. Security lines dragged, traffic froze, and some speakers missed their own panels. "There's so much excitement about being here, but it's kind of reinforced some of those silos," said Priya Vora of the Digital Impact Alliance, noting the best conversations often slipped into closed-door dinners.

Open access brought fresh energy

The upside of an open summit was obvious in the hallways: young builders everywhere, pitching tools for business and social good. For many, it was their first direct line to funders and policymakers. The energy was messy, but real.

Inclusion, local languages, and humans in the loop

This was the fourth major AI safety summit, but the first in the Global South, a shift that matters. Standards have long skewed to those with time, money, and proximity to power. Bringing the forum to India widened the table and spotlighted the need for civil society participation.

On the ground, "meeting people where they are" meant WhatsApp and local languages. Rocket Learning uses AI-enabled WhatsApp to support parents and community childcare workers. Digital Green's FarmerChat now advises smallholders across 15 languages by voice. "To be able to converse to a product in a very natural way," said CTO Vineet Singh, "that is where it's a big game-changer."

Lacina Koné of Smart Africa put it bluntly: "Being smart does not mean being smart in English, French, or Spanish… You can be smart in your own language." India's Prime Minister Narendra Modi echoed a safeguard many repeated: keep humans responsible for final decisions. The U.N. emphasized equity, warning AI will otherwise amplify existing gaps. Laura Gilbert of the Tony Blair Institute called inclusion "war… and I don't accept an outcome in which we lose."

Sovereignty and open-source: building leverage without extraction

Macron cautioned countries against becoming data mines for foreign firms. With data treated as strategic fuel, extraction risks are rising, along with fears of surveillance and bias. Smart Africa is piloting "data embassies" for shared sovereignty and cross-border AI hubs to grow skills, an approach built on "cooperate on foundations, compete on performance."

Open-source came up as a public-good path. "Building in the public sector with open source is incredibly important," said Gilbert. Digital Green noted that open tools create faster improvement loops at scale. Priya Vora added a needed reset: "Of course you need compute… But none of that will matter unless you're pointing AI at real problems to solve."

An impact-first "third way"

The outcome document, signed by 88 governments and international bodies, backed equitable access and human-centric AI. There was still no consensus on global rules. "We totally reject global governance of AI," said U.S. adviser Michael Kratsios, arguing safety-focused regimes could choke competition.

Even without agreement on regulation, India advanced a different frame: optimize AI for measurable public good and development targets, not just speed or control. As Adams put it, many countries are starting from the question: how do we build safe, inclusive, trustworthy tools that help people meet real priorities?

That lens showed up in sector casebooks, new funding for evaluations, and a harder line on evidence. "If we don't measure those [impacts], we don't know which technology is positive, which is neutral, and which is negative," said Iqbal Dhaliwal of J-PAL. WHO's Alain Labrique pressed for scientific testing "from the training of models all the way to their deployment," to build trust and ensure safety. For reference, see WHO's Digital Health and Innovation work.

What governments, IT leaders, and development teams should do next

  • Prioritize local language and voice access. Fund tools that work over WhatsApp, IVR, and low-end devices. Make bilingual/bidirectional support a default, not an add-on.
  • Keep humans in the loop. Set decision thresholds, escalation paths, and audit trails. Make one person accountable for every AI-enabled decision that affects people's lives.
  • Back proof, not pitches. Require outcome metrics, cost-effectiveness analysis, and independent evaluations before scale. Publish methods and results.
  • Adopt open standards and, where feasible, open-source components. Insist on procurement terms that allow audits, model cards, and secure data access logs.
  • Treat data as a strategic asset. Enforce consent, minimization, and local storage where appropriate. Explore regional data trusts or "data embassy" models to share benefits without ceding control.
  • Pool resources regionally. Co-invest in training, shared compute, and safety testing facilities. Coordinate on benchmarks and red-teaming.
  • Point AI at concrete backlogs: agronomy advice, disease surveillance, teacher support, benefits delivery, and grievance redress. Kill vanity pilots quickly.
  • Build talent pipelines for minority-language communities and frontline workers. Pay for community validators and translators as core infrastructure.
  • Mandate pre-deployment safety checks. Red-team for bias, hallucinations, security, and misuse. Set rollback plans and user feedback loops.

The India AI Impact Summit didn't settle the rulebook. It did something more practical: it put impact, equity, and evidence at the center, and made clear that direction-setting won't be reserved for a few capitals anymore.
