NVIDIA invests $5B in Intel to co-develop AI infrastructure and PC chips
NVIDIA will invest $5B in Intel at $23.28 per share to co-develop AI data center and PC platforms. The plan relies on NVLink, remains subject to regulatory approval, and has no announced timeline.

Announced September 18, 2025 (7:00 AM EDT), NVIDIA plans to invest $5 billion in Intel common stock at $23.28 per share. The investment pairs with a collaboration to build custom AI infrastructure for data centers and integrated CPU-GPU products for PCs.
The companies will target hyperscale, enterprise, and consumer segments. A joint press conference and a combined press release outlined the plan, which remains subject to regulatory approvals. No product timeline was provided.
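As a back-of-envelope check on the announced terms, the stated dollar amount and share price imply roughly how large a stake the investment buys (share count only; the resulting ownership percentage was not disclosed and is not computed here):

```python
# Back-of-envelope arithmetic from the announced terms:
# $5 billion invested at $23.28 per share.
investment_usd = 5_000_000_000
price_per_share = 23.28

implied_shares = investment_usd / price_per_share
print(f"Implied shares: {implied_shares:,.0f}")  # roughly 214.8 million shares
```
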
What was announced
For data centers, Intel will build custom x86 CPUs that NVIDIA will integrate into its AI infrastructure platforms. For PCs, Intel will manufacture x86 system-on-chips (SoCs) that integrate NVIDIA RTX GPU chiplets for systems that need CPU and GPU on a single package.
The architectures will connect through NVIDIA NVLink, giving NVIDIA's AI and accelerated-compute stack higher-bandwidth pathways to Intel CPUs within the x86 ecosystem.
Signals for product teams
- Convergence of CPU and GPU planning: Expect tighter CPU-GPU co-design, packaging considerations, and interconnect-first system architecture choices.
- SKU strategies shift: Integrated RTX chiplet SoCs may create new PC tiers while influencing discrete GPU attach rates and thermals.
- Platform lock-in risk: NVLink-first integrations could favor NVIDIA accelerator roadmaps; weigh against multi-vendor strategies.
- Supply chain and NPI: Joint programs add dependency management across two Tier-1 vendors; model long-lead items and regulatory timing explicitly.
Data center implications
- System design: Expect configurations that optimize NVLink pathways for CPU-accelerator data flow and memory locality.
- Workload mapping: AI training and inference stacks may see lower CPU-GPU transfer overhead, improving utilization if software support lands as promised.
- Procurement: If successful, this creates a new option alongside existing x86 + accelerator builds. Plan pilot allocations, not fleet-wide swaps.
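The utilization claim above can be made concrete with a toy model: if a workload must move a fixed amount of data between CPU and GPU each step, a faster interconnect shrinks the transfer fraction of the step. All numbers below are hypothetical assumptions for illustration, not vendor figures, and the model ignores transfer/compute overlap:

```python
# Illustrative only: how interconnect bandwidth affects GPU utilization
# when a fixed volume of data moves between CPU and GPU per step.
# All figures are hypothetical assumptions, not vendor specifications.

def step_utilization(compute_s: float, bytes_moved: float, gbps: float) -> float:
    """Fraction of a step spent computing, assuming transfers are not overlapped."""
    transfer_s = bytes_moved / (gbps * 1e9 / 8)  # convert Gb/s to bytes/sec
    return compute_s / (compute_s + transfer_s)

data = 4e9       # 4 GB shuttled per training step (assumed)
compute = 0.10   # 100 ms of pure GPU compute per step (assumed)

for link_gbps in (128, 512):  # slower vs. faster link, both illustrative
    u = step_utilization(compute, data, link_gbps)
    print(f"{link_gbps:4d} Gb/s link -> utilization {u:.0%}")
```

Under these made-up numbers, quadrupling link bandwidth roughly doubles utilization, which is why "lower CPU-GPU transfer overhead" translates directly into fleet efficiency if the software stack delivers.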
PC product implications
- Integrated graphics redefined: RTX chiplets inside x86 SoCs could compress mid-tier laptop and small-form-factor (SFF) desktop configurations.
- Thermals and form factors: New cooling envelopes and board layouts will be required to handle integrated GPU performance in thin designs.
- Software stack: Driver, firmware, and power management must align out of the gate; early OEM design wins will set expectations.
What's confirmed vs. open
- Confirmed: $5B NVIDIA investment in Intel stock at $23.28 per share; joint development for data center CPUs and PC SoCs; NVLink used to connect architectures; regulatory approvals required.
- Not disclosed: Product specs, performance targets, manufacturing nodes, thermal envelopes, pricing, and launch windows.
Executive context
NVIDIA's CEO framed the collaboration as tightly coupling its AI and accelerated compute stack with Intel's CPUs and the x86 ecosystem. Intel's CEO said the partnership complements NVIDIA's position to enable new industry breakthroughs. Both leaders discussed the announcement in a joint press conference.
Action plan for the next 90 days
- Kick off architecture reviews for NVLink-based CPU-GPU topologies; identify candidate workloads and platforms for pilots.
- Define decision gates: performance targets, TCO models, supply risk thresholds, and software readiness criteria for adoption.
- Engage vendor teams to map compatibility with current racks, chassis, and thermal budgets; request preliminary integration guides.
- Prepare fallback paths (multi-vendor accelerators, standard PCIe designs) until timelines and pricing are concrete.
- Upskill teams on interconnect-aware system design and chiplet-era packaging.
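The "decision gates" item above benefits from being explicit: adoption criteria work best as testable thresholds rather than judgment calls. A minimal sketch, with entirely hypothetical thresholds and pilot numbers standing in for a real evaluation:

```python
# A minimal sketch of decision gates for pilot-to-adoption calls.
# All thresholds and pilot figures are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class PilotResult:
    perf_vs_baseline: float  # throughput ratio vs. current platform
    tco_vs_baseline: float   # 3-year TCO ratio (lower is better)
    sw_stack_ready: bool     # drivers/toolchain passed qualification

def passes_gates(r: PilotResult,
                 min_perf: float = 1.25,   # assumed: require >=25% speedup
                 max_tco: float = 0.95) -> bool:  # assumed: require >=5% TCO savings
    """Adopt only if the pilot clears both the performance and TCO gates
    and the software stack is qualified."""
    return (r.perf_vs_baseline >= min_perf
            and r.tco_vs_baseline <= max_tco
            and r.sw_stack_ready)

pilot = PilotResult(perf_vs_baseline=1.4, tco_vs_baseline=0.9, sw_stack_ready=True)
print("adopt" if passes_gates(pilot) else "hold")
```

Writing the gates down this way forces the team to agree on thresholds before pilot results arrive, which keeps the fallback paths above as genuine options rather than afterthoughts.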
What to monitor
- Regulatory milestones and any conditions that affect joint product scope.
- Early OEM and hyperscaler design commitments that signal maturity and time-to-market.
- Developer toolchains, driver stacks, and platform software roadmaps aligned to NVLink-connected x86 systems.
Bottom line: this move points to tighter CPU-GPU integration across data center and PC lines. Treat it as a serious roadmap input, but wait for hard specs and early silicon before committing major volumes.