Samsung and Nvidia team up on custom CPUs and XPUs to cement NVLink Fusion in AI data centers
Nvidia has partnered with Samsung Foundry to co-design and produce custom CPUs and XPUs, extending the reach of NVLink Fusion across rack-scale systems. Announced at the 2025 OCP Global Summit in San Jose, the move signals Nvidia's intent to be the connective layer every AI data center depends on.
For IT leaders and developers, this is a clear message: future AI infrastructure will be built around direct, high-speed links between CPUs, GPUs, and accelerators, with Nvidia setting the rules of engagement.
What NVLink Fusion brings to the stack
NVLink Fusion is an IP and chiplet solution that enables CPUs, GPUs, and accelerators to communicate directly at high bandwidth, reducing traditional data bottlenecks between compute components. It's engineered for rack-scale integration where throughput, latency, and predictable interconnect behavior matter.
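There's no public NVLink Fusion SDK yet, but the underlying idea, direct device-to-device links instead of host-staged copies, is already visible on today's CUDA systems. Here is a minimal sketch, assuming a multi-GPU machine with the standard CUDA runtime, that probes which GPU pairs can talk directly:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Probe which GPU pairs can exchange data directly (NVLink or PCIe
// peer-to-peer) instead of staging transfers through host memory.
int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int src = 0; src < count; ++src) {
        for (int dst = 0; dst < count; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            printf("GPU %d -> GPU %d: %s\n", src, dst,
                   canAccess ? "direct peer access" : "staged via host");
        }
    }
    return 0;
}
```

Treat this as a capability check, not a bandwidth measurement: PCIe-only systems can also report peer access, just at much lower throughput.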
Building on its earlier collaboration with Intel, Nvidia confirmed that partners such as Intel and Fujitsu can now build CPUs that speak NVLink Fusion natively. Samsung's role adds a full design-to-fab pathway for custom silicon tuned to AI workloads.
Open Compute Project discussions continue to push open hardware design, but Nvidia's approach with Fusion is deliberately curated to keep performance consistent across the ecosystem.
Ecosystem control: performance assurance vs. lock-in risk
According to reporting referenced in the announcement, Nvidia controls key pieces of the NVLink Fusion stack, including communication controllers, physical layers (PHY), and NVLink Switch licensing. Any custom chip built in this ecosystem must connect to Nvidia products under defined terms.
This control helps guarantee predictable performance and compatibility. It also concentrates leverage with Nvidia, which raises familiar questions for operators about openness, portability, and long-term optionality as competitors build their own silicon.
Why this matters for architects and platform teams
- Topology-first designs: Expect architectures that prioritize direct CPU-GPU and accelerator links, beyond standard PCIe paths, to keep training and inference pipelines fed.
- Chiplet-era flexibility: Custom CPUs/XPUs manufactured by Samsung could let hyperscalers tune core counts, cache, memory interfaces, and interconnect density around specific AI workloads.
- Ecosystem gravity: With Intel, Fujitsu, and now Samsung in the mix, NVLink Fusion becomes a strong default for heterogeneous compute inside GPU-centric racks.
For developers, this shifts how you think about data movement. The bottleneck is no longer just kernel efficiency; it's end-to-end topology, interconnect bandwidth, and how well your scheduler exploits both.
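To see that bandwidth effect directly, here is a hedged microbenchmark using only standard CUDA runtime calls (cudaMemcpyPeer and CUDA events) to time a direct GPU-to-GPU copy. The payload size and device indices are illustrative assumptions; the sketch says nothing about NVLink Fusion itself, but it shows why the interconnect, not kernel code, can dominate a pipeline:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Time a GPU 0 -> GPU 1 copy with CUDA events to estimate effective
// interconnect bandwidth. Payload size and device indices are
// illustrative; compare the result against a PCIe-only baseline.
int main() {
    const size_t bytes = 256ull << 20;  // 256 MiB payload
    void *src = nullptr, *dst = nullptr;

    cudaSetDevice(0);
    cudaMalloc(&src, bytes);
    cudaDeviceEnablePeerAccess(1, 0);   // let GPU 0 reach GPU 1 directly
    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);

    cudaSetDevice(0);
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaMemcpyPeer(dst, 1, src, 0, bytes);  // warm-up, untimed
    cudaEventRecord(start);
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("GPU 0 -> GPU 1: %.1f GB/s\n", (bytes / 1e9) / (ms / 1e3));
    return 0;
}
```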
Competitive backdrop
Rivals including OpenAI, Google, AWS, Broadcom, and Meta are investing in their own chips to reduce dependency on Nvidia. Nvidia's counter is to embed its IP deeper into the data center fabric so that even third-party silicon plugs into its ecosystem.
That strategy reframes Nvidia from a GPU vendor into an infrastructure partner: one that provides the interconnect, the controllers, and the switch fabric that everything else talks through.
What to watch next
- Silicon drops: Announcements of custom CPUs/XPUs fabbed by Samsung that advertise native NVLink Fusion connectivity.
- Licensing clarity: Details on NVLink Switch licensing and what that means for multi-vendor racks and procurement flexibility.
- Developer enablement: Tooling, SDK updates, and reference designs that expose Fusion's topology advantages without adding integration friction.
Practical steps for IT and development teams
- Evaluate AI cluster designs that place GPU adjacency and direct CPU-GPU links at the center, with PCIe as a complement where appropriate.
- Push vendors for specifics on bandwidth, latency, NUMA characteristics, and memory access patterns across NVLink Fusion-enabled nodes.
- Plan for lock-in risk: build procurement guardrails, test portability scenarios, and maintain a migration path if you diversify silicon down the line.
- Benchmark full pipelines (data ingest to inference) under realistic multi-tenant loads; topology-aware scheduling will matter more than synthetic microbenchmarks (a minimal sketch follows this list).
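As a starting point for that last item, here is a toy end-to-end sketch, not a production harness: a double-buffered ingest-to-compute pipeline timed as a whole with CUDA events. The kernel, chunk sizes, and stream layout are assumptions for illustration:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Toy ingest -> compute pipeline: chunked host-to-device copies on one
// stream overlapped with a kernel on another, double-buffered so a
// copy never overwrites a buffer still being processed.
__global__ void scale(float* data, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const size_t n = 1 << 24;             // ~16M floats per chunk
    const size_t bytes = n * sizeof(float);
    const int chunks = 8;

    float *host = nullptr, *dev[2] = {nullptr, nullptr};
    cudaMallocHost(&host, bytes);         // pinned, enables async copies
    cudaMalloc(&dev[0], bytes);
    cudaMalloc(&dev[1], bytes);

    cudaStream_t copyStream, computeStream;
    cudaStreamCreate(&copyStream);
    cudaStreamCreate(&computeStream);
    cudaEvent_t start, stop, copied[2], done[2];
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    for (int b = 0; b < 2; ++b) {
        cudaEventCreate(&copied[b]);
        cudaEventCreate(&done[b]);
    }

    cudaEventRecord(start, copyStream);
    for (int c = 0; c < chunks; ++c) {
        int b = c % 2;
        if (c >= 2)                       // buffer reuse: wait for compute
            cudaStreamWaitEvent(copyStream, done[b], 0);
        cudaMemcpyAsync(dev[b], host, bytes, cudaMemcpyHostToDevice, copyStream);
        cudaEventRecord(copied[b], copyStream);
        cudaStreamWaitEvent(computeStream, copied[b], 0);
        scale<<<(int)((n + 255) / 256), 256, 0, computeStream>>>(dev[b], n);
        cudaEventRecord(done[b], computeStream);
    }
    cudaEventRecord(stop, computeStream);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("pipeline time: %.1f ms for %d chunks\n", ms, chunks);
    return 0;
}
```

Run the same loop with both streams collapsed into one; the gap between the two timings is roughly what topology-aware scheduling buys you on that hardware.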
Bottom line
Nvidia's partnership with Samsung is about control of the interconnect, the layer where performance is won or lost. If your roadmap includes AI at scale, start designing for a future where CPUs, GPUs, and accelerators communicate over a tightly governed fabric, and where vendor terms shape your options as much as raw FLOPs.
If you want structured learning paths for your team around AI systems and deployment skills, explore our curated AI courses.
Learn more about NVLink technologies directly from the source: Nvidia NVLink.