China and Huawei are pulling ahead in AI by going open source, says Nvidia's Jensen Huang
Nvidia's CEO put it bluntly: China and Huawei are outpacing the US in AI because they leaned into open source. The US still leads in frontier, proprietary models by roughly six months, but most of the 1.4 million AI models worldwide are open source. Scale and accessibility are doing the heavy lifting.
The claim, in plain terms
Huang's core point: open source accelerates adoption. Without it, startups stall, universities can't run real experiments, and scientists can't test ideas at the pace the field demands. He pointed to Linux, Kubernetes, and PyTorch as proof that open ecosystems compound progress.
His message to the US was clear: the top-tier models are great, but the wider market is choosing what anyone can use, modify, and ship. Whoever applies the technology first wins this industrial revolution.
Energy and manufacturing: the hard constraints
Huang drew a line between AI ambition and physical limits. "China's energy production is twice that of the United States. Our economy is larger than theirs, but our energy production is only half. That just doesn't make sense."
He argued the US hollowed out parts of its industrial chain by offshoring manufacturing, and is now trying to bring it back. The catch: you can't build chip fabs, assembly plants, or AI data centers without massive energy. "It's a fact that we are several generations ahead in chips, but please don't be complacent. Chips ultimately lead to manufacturing, and no one should question China's manufacturing capabilities."
Why open source is tilting the scoreboard
- Startups: lower costs, faster iteration, easier product-market tests.
- Universities and labs: reproducibility, shared baselines, wider participation.
- Engineering velocity: shared tooling compounds (e.g., Kubernetes, PyTorch).
Open code and permissive licenses let teams ship now, not six months from now. That advantage scales across millions of models and thousands of organizations.
What this means for builders and researchers
- Go open-first when possible. Fine-tune public models, contribute fixes, and keep a clean fork strategy so you can upstream improvements.
- Design for portability. Containerize, keep infra-as-code, and avoid tight coupling to a single vendor's stack.
- Budget for energy and capacity. Training schedules, power availability, and data center slots will matter as much as GPUs.
- Expect supply-chain friction. Lead times on chips and racks will influence your roadmap; plan buffers and alternatives.
- Uplevel your team's core skills (MLOps, distributed training, evals). A focused skills path helps you ship reliably.
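The "design for portability" point above can be made concrete with a small sketch: resolve every vendor-specific detail (endpoint, model name, timeouts) from the environment at startup, so switching providers is a deployment change rather than a code change. This is a minimal illustration using only the Python standard library; the variable names (`MODEL_ENDPOINT`, `MODEL_NAME`, `REQUEST_TIMEOUT_S`) and defaults are hypothetical, not from any specific stack.

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class InferenceConfig:
    """Vendor-agnostic inference settings, resolved once at startup."""
    endpoint: str
    model: str
    timeout_s: float


def load_config(env=None) -> InferenceConfig:
    # Illustrative defaults: a local open-model server; any names here
    # are placeholders, not a specific vendor's API.
    env = os.environ if env is None else env
    return InferenceConfig(
        endpoint=env.get("MODEL_ENDPOINT", "http://localhost:8000/v1"),
        model=env.get("MODEL_NAME", "llama-3-8b-instruct"),
        timeout_s=float(env.get("REQUEST_TIMEOUT_S", "30")),
    )


# Swapping providers means changing the environment, not the code:
cfg = load_config({"MODEL_ENDPOINT": "https://alt-provider.example/v1"})
print(cfg.endpoint)  # -> https://alt-provider.example/v1
```

The frozen dataclass keeps the config immutable after startup, and because nothing above imports a vendor SDK, the same container image can point at a local open-source model server or a hosted API with no rebuild.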
The takeaway
The US may hold the edge in elite, closed models and advanced chips. China, backed by open-source momentum, energy output, and manufacturing depth, is moving fast at scale. If you build in AI, the smart move is simple: participate in open source, design for constraints, and ship where availability, not just theory, wins.