From Davos to New Delhi: AI Governance in a Fractured Global Order
Strip away the Cold War throwbacks and oil talk. The contest that matters now is over data, compute, and control. Nations are repositioning around who owns the pipes, the models, and the minerals that feed them.
That reality showed up in policy, not just panels. The latest U.S. National Security Strategy elevates data infrastructure and AI systems to strategic assets. Canada's response, under Prime Minister Mark Carney, leans into "sovereign AI": ramping domestic compute, courting partners, and treating AI as core economic policy.
The quiet contest: data, compute, and control
AI capacity is concentrated. Compute footprints, proprietary data, and export regimes decide who sets terms and who takes them. Moves like Canada's push to localize infrastructure and collaborate on AI safety reflect a broader middle-power tactic: reduce single-country dependency without burning alliances.
Trade and tech are fused now. Deals that once looked like commerce read more like security. That is true in North America, across Europe, and throughout Asia and Africa.
Middle powers between orders
At Davos, Mark Carney named a long-running tension: the rules-based order never served everyone equally. For much of the Majority World, that was obvious; hence BRICS' rise and its recent expansion.
The break with old narratives isn't nostalgia. It's arithmetic. When benefits cluster with incumbents, new blocs emerge. The question is whether middle powers can deliver something more equitable-or just a lighter version of the same hierarchy.
Institutions under strain, and why that matters for AI
WTO, IMF, World Bank: stability anchors for some, mixed records for many. For countries long boxed in by colonial legacies, tied aid, and uneven trade rules, the bigger fear now is volatility: currency shocks, supply disruptions, and digital dependency.
Add a critical layer: digital public goods. Identity rails, payments, cloud, and broadband are as material as ports and power. Without fair rules for access and use, AI will widen inequality-fast.
G7, G20, and the limits of consensus
The French G7 presidency faces conflicts that test both solidarity and priorities, including a T7 track centered on AI for middle powers. The G20, under U.S. leadership, sidelined South Africa's bolder reform agenda in 2025, even as its Leaders' Declaration nodded at renewable finance, critical minerals, and debt relief.
Across working groups, strong Ministerial Declarations became weaker Chair's Statements. The digital track followed suit. Outcomes signaled a ceiling: when dominant players and conventional knowledge brokers hold the pen, equity language survives; action points thin out.
New Delhi's test: can the Global South reset the AI agenda?
Indian Prime Minister Narendra Modi opened the India AI Impact Summit 2026 in New Delhi with the Global South in focus. Expectations are clear: move beyond warnings about dual-use AI and put inequality, access, and capacity on equal footing with safety and security.
The 2025 BRICS statement warned that a flood of overlapping governance efforts, absent real coordination, could deepen asymmetries and undercut multilateralism. New Delhi has a chance to translate that caution into a plan grounded in resourcing, representation, and accountability.
Policy options currently on the table
- Sovereign compute, shared standards: Build regional compute hubs with open interfaces. Pool demand to lower cost, avoid lock-in, and keep auditability on the table.
- Data governance for equity: Enable trusted cross-border flows with public-interest guardrails. Prioritize data access for research, health, agriculture, and climate, not just commercial scale.
- Safety and security with proportionality: Align incident reporting, model evaluation, and dual-use controls. Avoid regimes that freeze out smaller states from research and access.
- Procurement as leverage: Use interoperability, portability, and open licensing requirements to keep markets contestable. Tie public money to transparency and local capacity-building.
- Digital public infrastructure (DPI): Fund identity, payments, connectivity, and cloud baselines as public goods, not vendor funnels. Link DPI to service delivery, not just pilots.
- Finance and minerals: Tie concessional finance to fair terms on critical minerals and battery supply chains. Pair with debt relief mechanisms that protect fiscal space for digital investment.
- Representation with teeth: Expand voting power and technical seats for the Majority World in standard-setting bodies and AI safety forums. Rotate secretariats; publish dissent, not just consensus.
- Inequality metrics in every AI program: Require ex-ante and ex-post distributional impact checks. Fund remediation when harms land on marginalized groups.
What to watch in 2026
- Compute alliances: Regional facilities, cloud credits for research, and cross-border energy deals that anchor GPU supply.
- Standards and testing: Convergence on model evaluations, incident taxonomies, and provenance signals that smaller states can implement.
- Multilateral reform: Concrete moves to rebalance seats, budgets, and mandate scope in digital and financial institutions.
- Procurement shifts: National frameworks that mandate interoperability, source code escrow, and exit rights for high-risk systems.
- Global South coalitions: Joint positions across BRICS, AU, ASEAN, CELAC on data access, compute financing, and safety cooperation.
The bottom line
The rupture is real. AI is the leverage point where control of data, compute, and rules will set development paths for decades. Grand statements won't fix the gap between principles and practice.
Progress looks like this: credible finance for DPI, shared compute tied to open standards, enforceable transparency in public AI, and real seats for those who live with the outcomes. Anything less keeps the old order with new branding.
Further reading: For reference frameworks and processes shaping debate, see the OECD AI Principles and the UN High-level Advisory Body on AI.