Jensen Huang Says China's AI Is Nanoseconds Behind, Calls US Chip Strategy Into Question
Jensen Huang says China's AI is "nanoseconds behind," a tight race. Prepare for policy swings, dual-source compute, and portable, vendor-agnostic stacks.

"They're nanoseconds behind us." What NVIDIA's CEO is signaling about China's AI-and how to act
NVIDIA CEO Jensen Huang says China's AI is "nanoseconds behind" the US. That framing matters. It challenges the idea that the gap is wide and safe, and it raises execution risk for anyone betting on one geography, one vendor, or one policy staying still.
If you own AI strategy, procurement, or P&L, assume a close race. Build for volatility, multi-vendor futures, and policy-driven shocks.
What Huang actually said
Huang calls China "formidable, innovative, fast-moving, and underregulated." He rejects common myths: that China can't design chips, can't manufacture, or trails by years. His point: "They're nanoseconds behind us… we've got to go compete."
He also argues US firms should be allowed to compete globally: "It is our single best industry. Why would we not allow this industry to go compete for its survival?"
Policy and market moves reshaping supply
- US restrictions blocked sales of NVIDIA's H20 AI GPUs to China in April 2025, then allowed them again in July after lobbying. Demand spiked; Chinese officials later urged firms to avoid the H20 on security grounds.
- In August, NVIDIA and AMD reportedly agreed to send 15% of China AI-chip revenue back to the US government as part of export licensing. President Trump stated, "The H20 is obsolete."
- Chinese regulators barred top firms from buying or testing NVIDIA's RTX Pro 6000D and opened an antitrust review tied to the Mellanox acquisition.
Net effect: unpredictable permissions, fluctuating access, and new cost structures tied to export approvals.
Huawei's three-year plan to outscale
Huawei laid out a plan to overtake NVIDIA inside China. Its next Ascend chips will link up to 15,488 accelerators via a new "UnifiedBus," which Huawei claims is up to 62x faster than NVIDIA's next NVLink144.
Context: NVIDIA's current NVLink72 connects up to 72 Blackwell GPUs and 36 Grace CPUs. Huawei's strategy appears to be scale by numbers, tuned for China's software stack. For NVIDIA, protecting share in China (once ~95%) will require faster delivery, deeper partnerships, and compelling TCO per token trained and served.
What this means for your strategy
- Assume parity pressure: Plan as if Chinese AI capability will match or surpass yours in specific workloads within your planning horizon. Your moat won't be raw FLOPs; it's data, distribution, and defensible use cases.
- Budget for policy risk: A 15% revenue share to the US on China sales behaves like a margin tax. Model it. Stress-test P&L against new fees, delays, or abrupt access changes.
- Dual-source compute: Where feasible, qualify at least two accelerator stacks (e.g., NVIDIA plus a domestic alternative per region). Abstract with frameworks that reduce vendor lock-in.
- Architect for swapability: Separate model layer, feature layer, and infra layer. Use containerization and fabric-agnostic orchestration. Keep interconnect choices open until you must commit.
- Localize by region: Expect stricter data residency and procurement guidance in China and allied markets. Build compliant, semi-autonomous stacks per region.
- Expect shortages: If H20 or other "compliant" parts are greenlit, demand will exceed supply. Reserve capacity early, diversify foundry and packaging exposure, and secure spares.
- Track interconnect roadmaps: System performance is now an interconnect story. Watch NVLink72 to NVLink144, Huawei UnifiedBus, and Ethernet alternatives. Bottlenecks shift TCO quickly.
- Manage software lock-in: Prioritize open formats (ONNX) and portable tooling (serving stacks like vLLM and Triton; training frameworks like Megatron-LM or equivalents). Keep fine-tuning, RAG, and deployment pathways portable.
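The "budget for policy risk" point above can be made concrete: a revenue-share fee on regional sales behaves like a tax on the top line, so it compresses gross margin even though unit economics are unchanged. A minimal sketch, using hypothetical placeholder figures (not NVIDIA or market data):

```python
# Minimal P&L stress test for a revenue-share fee on regional sales.
# All figures are hypothetical placeholders, not NVIDIA/market data.

def gross_margin(revenue: float, cogs: float, fee_rate: float = 0.0) -> float:
    """Gross margin after a government revenue-share fee on sales.

    The fee reduces revenue before margin is computed, so it acts
    like a top-line tax rather than an added cost line.
    """
    net_revenue = revenue * (1.0 - fee_rate)
    return (net_revenue - cogs) / revenue

# Hypothetical regional segment: $100M revenue, $55M COGS.
base = gross_margin(100.0, 55.0)             # no fee
taxed = gross_margin(100.0, 55.0, 0.15)      # 15% revenue share

print(f"baseline margin: {base:.1%}")   # 45.0%
print(f"with 15% fee:    {taxed:.1%}")  # 30.0%
```

Note the asymmetry: a 15-point hit to revenue becomes a 15-point hit to margin, which on a 45% baseline is a third of gross profit gone. Run the same function across fee, delay, and volume scenarios before committing capacity.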
Signals to watch (next 12-18 months)
- US export control updates and any expansion of revenue-share models beyond China.
- China procurement directives favoring domestic chips, and enforcement intensity.
- Huawei execution: real-world throughput, yield, and delivery timelines for Ascend and UnifiedBus SuperPods.
- NVIDIA delivery on Blackwell, NVLink144, and networking (Spectrum-X) schedules.
- Advanced packaging capacity (e.g., CoWoS) and lead times at TSMC and partners.
- Price curves: $/TFLOP and $/token for training and inference across vendors.
- Antitrust actions in China and elsewhere that could alter channel strategy.
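Tracking the $/token signal above is straightforward once you normalize vendors to a common unit. A sketch, with illustrative throughput and pricing assumptions (the vendor names and numbers are placeholders):

```python
# Rough $/token comparison across accelerator options.
# Hourly costs and sustained throughput figures are illustrative
# assumptions only; substitute your own benchmark data.

def dollars_per_million_tokens(hourly_cost: float, tokens_per_sec: float) -> float:
    """Effective cost per million tokens for one accelerator option."""
    tokens_per_hour = tokens_per_sec * 3600
    return hourly_cost / tokens_per_hour * 1_000_000

# Hypothetical options: (hourly $ cost, sustained tokens/sec at target latency)
options = {
    "vendor_a": (4.00, 2500.0),
    "vendor_b": (2.50, 1200.0),
}
for name, (cost, tps) in options.items():
    print(f"{name}: ${dollars_per_million_tokens(cost, tps):.2f} per 1M tokens")
```

The cheaper hourly rate is not automatically the cheaper token: sustained throughput at your latency target dominates. Re-run this as price curves move each quarter.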
Practical next steps
- 30 days: Map critical workloads to hardware-agnostic runbooks. Identify what must stay on NVIDIA vs. what can move. Create a dependency inventory (drivers, interconnects, compilers).
- 60 days: Run pilot deployments on an alternative stack in each key region. Measure tokens/sec, latency, failure modes, and operator workload. Negotiate contingent capacity with multiple vendors.
- 90 days: Update your AI P&L to reflect export fees, potential delays, and regional duplication. Lock in a procurement plan with triggers tied to policy events and product releases.
- Ongoing: Build your own "model portability tax" metric. If the cost to switch falls quarter over quarter, you're hedged. If it rises, you're slipping into lock-in.
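The "portability tax" metric above can be operationalized as a quarterly series of estimated switching costs and a simple trend check. A sketch with hypothetical re-platforming estimates (the classification labels and dollar figures are assumptions, not a standard metric):

```python
# "Portability tax" tracker: estimated cost to move a workload off the
# incumbent stack, sampled each quarter. All numbers are hypothetical.

def portability_trend(quarterly_costs: list[float]) -> str:
    """Classify the quarter-over-quarter trend of switching cost."""
    if len(quarterly_costs) < 2:
        return "insufficient data"
    deltas = [b - a for a, b in zip(quarterly_costs, quarterly_costs[1:])]
    if all(d <= 0 for d in deltas):
        return "hedged"        # switching cost falling: options stay open
    if all(d >= 0 for d in deltas):
        return "locking in"    # switching cost rising: lock-in deepening
    return "mixed"

# Estimated $k to re-platform a flagship inference workload, per quarter.
print(portability_trend([900.0, 820.0, 760.0]))  # hedged
print(portability_trend([500.0, 560.0, 640.0]))  # locking in
```

What goes into each quarterly estimate is your call: engineering hours to requalify drivers and compilers, duplicated capacity during migration, retraining or re-validation cost. The point is that the number exists and someone owns its direction.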
Why Huang's stance matters
If the gap is measured in "nanoseconds," advantage flips on execution speed (sourcing, deployment, and iteration cycles), not slogans. Your edge comes from compressing cycle time while keeping options open.
Set policy volatility as a baseline assumption, then design for resilience. Those who treat this as a procurement problem will pay a premium. Those who treat it as a systems and option-value problem will compound.