Chinese Tech Giants Shift AI Training Overseas to Tap Nvidia Chips, Sidestepping U.S. Curbs

Chinese tech giants are training AI models offshore to access Nvidia chips amid U.S. export curbs. Alibaba and ByteDance are heading to Southeast Asia, while DeepSeek leans on stockpiled GPUs and Huawei.

Published on: Nov 27, 2025

Chinese tech firms take AI model training offshore to access Nvidia chips

Top Chinese companies are training new AI models outside China to access Nvidia hardware and sidestep U.S. restrictions, according to the Financial Times. The report points to a steady rise in offshore training since the U.S. moved in April to limit sales of Nvidia's H20 chip.

Alibaba and ByteDance are among those running training in Southeast Asian data centres, the FT said. Most firms are using lease agreements with facilities owned and operated by non-Chinese entities to stay within current rules.

DeepSeek is the outlier. It reportedly trained domestically after stockpiling Nvidia chips prior to the export bans, and is now working with local chipmakers led by Huawei to develop the next wave of Chinese AI accelerators.

The FT's account had not been independently verified at the time of writing. The companies named did not respond to requests for comment.

Why this matters for IT and development teams

  • Compute access shifts: Expect more large training runs to move offshore, while inference and fine-tuning remain closer to users and data.
  • Compliance-first setups: Clear separation of training locations, data residency, and operators is becoming standard practice.
  • Cost and latency trade-offs: Cross-border data movement, checkpoint syncing, and network egress can add both complexity and cost.
  • Vendor and policy risk: Adjust roadmaps for sudden changes in chip supply or export rules. Keep fallback options ready.

What to watch next

  • Further U.S. actions or clarifications from the Bureau of Industry and Security. See the overview of controls on advanced computing at bis.doc.gov.
  • Nvidia product adjustments for China-facing chips and the availability of alternatives.
  • Progress of Huawei-led accelerators and supporting software stacks.

Practical takeaways

  • Separate training vs. deployment early in your architecture to keep options open across regions and providers.
  • Abstract GPU vendors behind orchestration layers (K8s, Slurm, Ray) and maintain multi-cloud images to reduce switching costs (see the first sketch after this list).
  • Track model checkpoint portability and reproducibility so you can relocate training without losing weeks of work (the second sketch below shows one approach).
  • Build a simple policy watchlist (chips, export rules, data residency) and review it quarterly with procurement and security.
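
To make the orchestration point concrete, here is a minimal Python sketch using Ray. It is illustrative only: the cluster address, GPU count, and task body are assumptions, not details from the FT report. The idea is that training code declares resource needs ("one GPU per task"), while the cluster configuration, not the application, decides which provider, region, or vendor supplies them.

```python
import socket

import ray

# Connect to whichever Ray cluster the orchestration layer has provisioned.
# The same script runs unchanged whether that cluster sits on-prem, in a
# leased offshore data centre, or on a public cloud. For a remote cluster
# you would pass an address, e.g. ray.init(address="ray://<head-node>:10001").
ray.init()


@ray.remote(num_gpus=1)  # assumes the cluster actually has GPU nodes
def train_step(batch_id: int) -> str:
    # Placeholder workload: the task only says "I need one GPU" and lets the
    # scheduler pick the node; vendor and location live in the cluster config.
    return f"batch {batch_id} ran on {socket.gethostname()}"


if __name__ == "__main__":
    # Fan out a few steps; moving clusters means changing ray.init(),
    # not the training code.
    print(ray.get([train_step.remote(i) for i in range(4)]))
```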

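On checkpoint portability, the sketch below is one possible approach, assuming PyTorch; the function names and manifest fields are hypothetical rather than tied to any specific tool. Each checkpoint is written alongside a small manifest (content hash, RNG seed, framework version) so it can be verified after a cross-border transfer and resumed reproducibly on different hardware.

```python
import hashlib
import json
import os
import random
import time

import torch


def _sha256(path: str) -> str:
    """Hash the checkpoint in 1 MiB chunks so large files stay cheap to verify."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def save_portable_checkpoint(model, optimizer, step, seed, out_dir):
    """Write weights plus a manifest (hash, seed, framework version) so the
    run can be verified and resumed in another region or on other hardware."""
    os.makedirs(out_dir, exist_ok=True)
    ckpt_path = os.path.join(out_dir, f"step_{step:08d}.pt")
    torch.save(
        {"model": model.state_dict(), "optimizer": optimizer.state_dict(), "step": step},
        ckpt_path,
    )
    manifest = {
        "step": step,
        "seed": seed,
        "sha256": _sha256(ckpt_path),
        "torch_version": torch.__version__,
        "cuda_available": torch.cuda.is_available(),
        "created_unix": int(time.time()),
    }
    with open(ckpt_path + ".manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)
    return ckpt_path


def load_portable_checkpoint(model, optimizer, ckpt_path):
    """Check integrity against the manifest, restore state, and re-seed RNGs."""
    with open(ckpt_path + ".manifest.json") as f:
        manifest = json.load(f)
    if _sha256(ckpt_path) != manifest["sha256"]:
        raise ValueError("checkpoint hash mismatch; file may have been corrupted in transit")
    state = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    torch.manual_seed(manifest["seed"])
    random.seed(manifest["seed"])
    return state["step"]
```

In practice the manifest might also record the dataset snapshot and tokenizer version, but the principle is the same: make every checkpoint self-describing so it can move between sites without guesswork.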
If you're upskilling teams for AI infrastructure, MLOps, or compliance-aware development, explore role-based learning paths at Complete AI Training - Courses by Job or developer-focused certifications like AI Certification for Coding.

