How the AI race could play out for the US, China, and everyone else
The AI contest is still open, but three clear paths are taking shape. Each one carries different risks and opportunities for governments, budgets, and national capabilities.
Here's what could happen next, and what public leaders should do now.
Scenario 1: US-China dominance
The US still leads on big models and capital, while China pulls ahead in publications and patents. The gap in technical performance is narrowing fast, according to the Stanford AI Index. That puts both countries in a tight race on capability and deployment.
The economic models differ. China relies on state-directed investment and has been constrained by limited access to high-end chips, but it's building capacity and finding workarounds; last year's low-cost models like DeepSeek showed what's possible under those limits. The US runs on a powerful mix of tech giants, venture capital, and government programs, yet signs of a bubble and heavy debt financing for data centers raise questions about stability.
- Implications for governments:
- Decide where you align on compute, standards, and supply chains; ambiguity will be costly.
- Build a domestic compute and energy plan: grid upgrades, siting, water, and permitting.
- Create a public data advantage: trusted access, quality curation, and privacy by design.
- Stress-test exposure to a tech correction: tax revenues, pensions, and local incentives tied to AI build-outs.
- Develop procurement paths for AI systems with clear performance, audit, and red-team requirements.
Scenario 2: A tripolar world with the EU as rule-setter
If an AI bubble deflates, the EU's rule-making weight grows. The EU AI Act's risk-based approach becomes the reference point many countries copy or negotiate against. That makes compliance, safety, and documentation core to adoption, not afterthoughts. See the Commission's overview of the EU AI Act.
The US, China, and Europe will likely keep distinct models, but there's room for shared guardrails on narrow issues like safety testing and incident reporting. As Jake Sullivan put it, it would be irresponsible for the US and China to race ahead without talking about risks and shared opportunities.
- Implications for governments:
- Adopt interoperable standards for model evaluation, red-teaming, and documentation.
- Stand up cross-border incident reporting and recall mechanisms for high-risk systems.
- Require third-party testing and certification for public-sector deployments.
- Use policy sandboxes to speed learning while keeping firm safety baselines.
Scenario 3: Systemic disruption resets the field
Breakthroughs that slash costs change who can compete. Low-cost large language models, more local processing to cut energy use, sparse models that focus compute where it matters, and synthetic data for training when real data is scarce could all let mid-tier players catch up fast.
First-mover advantages fade. New alliances form. Growth spreads beyond today's hubs. Risks also spread as capable systems reach more actors, some with weak safeguards.
- Implications for governments:
- Update export controls and safety requirements as small models gain high capability.
- Fund efficiency research: algorithms, architectures, compression, and edge inference.
- Develop synthetic data programs with strict provenance, bias controls, and audits.
- Plan for misuse scenarios: procurement fraud, disinformation, biosecurity, and cyber.
What to watch in 2026
- Compute costs per token and per task; parity between open and closed models on key benchmarks.
- Chip supply chain progress: advanced lithography, packaging, and domestic fabs.
- Debt markets tied to data centers; signs of stress or covenant breaches.
- Energy prices, grid constraints, and water availability in new AI buildout zones.
- Adoption of synthetic data in regulated sectors and the quality gap vs. real data.
- Movement on global safety norms, test protocols, and incident disclosure.
No-regrets moves for public leaders
- Publish a national compute and energy plan linking capacity, siting, and resilience.
- Stand up an AI assurance stack: evaluations, red-teaming, model cards, and continuous monitoring.
- Upgrade procurement: outcome-based contracts, faster cycles, and kill-switch clauses.
- Build a public data program with strong privacy, security, and civic transparency.
- Run scenario planning across agencies; pre-plan responses to a burst bubble or supply shocks.
- Invest in workforce skills: policy, technical, and operational. For role-based options, see these curated AI course paths.
The path ahead is uneven, and technology may outrun policy at times. Clear priorities, flexible execution, and steady capability building will keep your country in the game, whichever scenario takes hold.