Fujitsu and NVIDIA to Build GPU-CPU AI Chip by 2030 for FugakuNEXT, Data Centers, and Robotics
Fujitsu and NVIDIA will co-develop an energy-efficient AI chip by 2030, pairing GPUs with Arm CPUs for data centers, supercomputing, and robots. FugakuNEXT targets 5-10x gains.

Fujitsu and NVIDIA to Co-Develop Energy-Efficient AI Chip by 2030 for Supercomputing and Robotics
Fujitsu and NVIDIA plan to co-develop an energy-efficient AI chip by 2030, a plan reported by Nikkei and confirmed by Fujitsu. The effort targets Japan's data centers, robotics, and broader industrial use, with a focus on scaling performance and reducing energy costs. The size of the investment was not disclosed.
The design will integrate NVIDIA GPUs with Fujitsu CPUs on shared boards and servers. NVIDIA's high-speed interconnect will let multiple chips operate as a single processor, enabling tighter scaling for training, inference, and HPC workloads.
FugakuNEXT: 5-10x Leap Around 2030
A centerpiece of the partnership is Japan's flagship Fugaku supercomputer (built by Fujitsu with the RIKEN research institute). RIKEN, Fujitsu, and NVIDIA have announced plans for FugakuNEXT, targeted for around 2030, with performance expected to be five to ten times that of the current system.
Physical AI: From Labs to Factory Floors
The collaboration extends to "physical AI" for autonomous machines. Fujitsu, NVIDIA, and Yaskawa Electric have begun discussions to apply the new chips and software stack to Yaskawa's industrial robots.
Strategic Context
Fujitsu's Arm-based chips have mostly served national projects like Fugaku. The tie-up aims to bring that capability into commercial AI markets. The move is also a test of Japan's role in a global AI race dominated by U.S. and Chinese firms, and ties into Fujitsu's push for "sovereign AI": building national AI systems from infrastructure through data governance.
Why This Matters for IT and Development Teams
- Architecture planning: Expect hybrid nodes combining Arm CPUs and NVIDIA GPUs over a high-speed interconnect. Plan for NUMA-aware scheduling and topology-aware job placement.
- Toolchains: Ensure your stack runs cleanly on Arm64 + CUDA (C/C++, Python, container images for arm64, CI pipelines that cross-compile where needed); a minimal environment check is sketched after this list.
- Orchestration: Validate Kubernetes/Slurm support for mixed CPU/GPU nodes and multi-node training that benefits from unified interconnect (see the node-inventory sketch below).
- Performance engineering: Profile for memory bandwidth, interconnect throughput, and energy per token/epoch (see the measurement sketch below). Optimize input pipelines and operator fusion to keep GPUs fed.
- Robotics: For "physical AI," align perception/planning stacks with GPU acceleration and real-time constraints. Plan for field updates, safety cases, and telemetry at the edge.
- Procurement and sustainability: Model TCO against energy-efficiency targets; evaluate cooling, rack density, and PUE impacts well ahead of 2030 availability.
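
For the toolchain item, here is a minimal sketch of an Arm64 + CUDA sanity check that could run as an early CI step. It assumes PyTorch as the framework under test, which the announcement does not specify; swap in whatever stack you actually ship.

```python
# Minimal sketch: verify a node is Arm64 and that CUDA is visible to the framework.
# Assumes PyTorch as the framework under test; any CUDA-enabled stack works here.
import platform
import sys


def check_arm64_cuda() -> bool:
    machine = platform.machine().lower()
    if machine not in ("aarch64", "arm64"):
        print(f"warning: expected an Arm64 node, got {machine}")

    try:
        import torch  # assumed framework; replace with your own stack's import
    except ImportError:
        print("PyTorch is not installed for this interpreter/architecture")
        return False

    if not torch.cuda.is_available():
        print("CUDA runtime is not visible to PyTorch on this node")
        return False

    print(f"OK: arch={machine}, device={torch.cuda.get_device_name(0)}")
    return True


if __name__ == "__main__":
    sys.exit(0 if check_arm64_cuda() else 1)
```

Running the same check on both x86_64 and arm64 runners surfaces missing arm64 wheels or images before they reach a cluster.

For the orchestration item, a sketch that inventories cluster nodes by CPU architecture and allocatable GPUs, using the official kubernetes Python client. It assumes the standard kubernetes.io/arch node label and the nvidia.com/gpu resource exposed by NVIDIA's device plugin; nothing here comes from the announcement itself.

```python
# Minimal sketch: list Kubernetes nodes by CPU architecture and allocatable GPUs,
# to confirm a cluster can actually schedule mixed Arm64 CPU/GPU workloads.
# Assumes the official `kubernetes` Python client and NVIDIA's device plugin,
# which exposes GPUs as the `nvidia.com/gpu` resource.
from kubernetes import client, config


def list_gpu_arm_nodes() -> None:
    config.load_kube_config()  # use config.load_incluster_config() when run in a pod
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        labels = node.metadata.labels or {}
        allocatable = node.status.allocatable or {}
        arch = labels.get("kubernetes.io/arch", "unknown")
        gpus = allocatable.get("nvidia.com/gpu", "0")
        print(f"{node.metadata.name}: arch={arch}, allocatable GPUs={gpus}")


if __name__ == "__main__":
    list_gpu_arm_nodes()
```

Workloads can then target the right hardware with a nodeSelector on kubernetes.io/arch: arm64 plus an nvidia.com/gpu resource request.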
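
For the performance-engineering item, a rough sketch of estimating joules per token on a single GPU by sampling NVML power draw around a workload. It assumes the nvidia-ml-py (pynvml) package; the workload and token count are placeholders, and a real harness would sample power on a background thread rather than once per step.

```python
# Rough sketch: estimate energy per token on one GPU by sampling NVML power draw.
# Assumes the nvidia-ml-py (pynvml) package and a single visible NVIDIA GPU.
import time

import pynvml


def energy_per_token(step_fn, tokens_per_step: int, steps: int = 10) -> float:
    """Run `step_fn` `steps` times and return an estimate in joules per token."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    energy_j = 0.0
    try:
        for _ in range(steps):
            start = time.monotonic()
            step_fn()  # placeholder for one training or inference step
            power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
            energy_j += power_w * (time.monotonic() - start)
    finally:
        pynvml.nvmlShutdown()
    return energy_j / (tokens_per_step * steps)


if __name__ == "__main__":
    # Dummy workload standing in for a step that processes 4096 tokens.
    print(f"{energy_per_token(lambda: time.sleep(0.2), 4096):.6f} J/token")
```

Tracked over time, a number like this gives a baseline to compare against once more energy-efficient hardware arrives.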
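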
What to Watch Next
- Interconnect details and topology (how many chips per node, node-to-node fabric, memory coherence models).
- Software support on Arm: CUDA, cuDNN, compiler maturity, and container images for popular frameworks.
- FugakuNEXT milestones and early developer access programs or simulators.
- Industrial robotics reference designs and SDK updates aligned with Yaskawa integrations.
Sources
This article cites information from Nikkei and Fujitsu.
Upskill for Arm + GPU AI
Want to prep your team for Arm64 + CUDA and AI infra at scale? Explore role-based learning paths here: AI Courses by Job.