OpenAI and Broadcom Announce 10GW Custom AI Accelerators, Ethernet-Scaled Clusters Rolling Out 2026-2029


Categorized in: AI News, IT and Development
Published on: Oct 14, 2025

OpenAI and Broadcom to deploy 10 GW of custom AI accelerators: what IT and dev teams should plan for now

OpenAI and Broadcom announced a multi-year collaboration to deliver racks of OpenAI-designed accelerators paired with Broadcom Ethernet-based networking. Deployments are targeted to begin in the second half of 2026 and run through the end of 2029 across OpenAI facilities and partner data centers.

The two companies signed a term sheet covering racks that integrate custom accelerators with Broadcom's end-to-end connectivity: Ethernet, PCIe, and optical. By designing its own chips, OpenAI aims to embed learnings from frontier models directly into hardware to raise capability and efficiency at cluster scale.

Key facts

  • 10 gigawatts of OpenAI-designed accelerators to be developed and deployed with Broadcom.
  • Racks start shipping H2 2026; rollout completes by end of 2029.
  • Ethernet-first fabric for scale-up and scale-out; Broadcom provides Ethernet, PCIe, and optical connectivity.
  • Term sheet signed, building on long-standing co-development and supply agreements.
  • Deployments across OpenAI sites and partner data centers; OpenAI reports 800M+ weekly active users.

Why Ethernet matters for your stack

The collaboration leans into standards-based Ethernet for both scale-up and scale-out, signaling a push for open, widely available fabrics over proprietary alternatives. Expect emphasis on congestion control, lossless modes, queue management, and host-to-host latency tuning to meet training and inference SLAs.

For network teams, this points to deeper adoption of features like ECN, PFC/DCB, traffic engineering, and telemetry-driven tuning. For platform teams, it suggests maturing support for Ethernet-optimized collective ops and transport layers across frameworks and runtimes.
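As a cheap first step toward that telemetry-driven tuning, the minimal sketch below polls Linux TCP extended statistics and reports deltas in ECN-related counters over a sample window. It assumes Linux hosts that expose /proc/net/netstat; exact counter names vary by kernel version, so it filters by name substring rather than hard-coding specific fields.

```python
"""Minimal sketch: surface ECN-related counter deltas from Linux TCP
extended statistics. Assumes /proc/net/netstat is available; counter
names vary by kernel, so we filter rather than hard-code them."""
import time


def read_netstat(path: str = "/proc/net/netstat") -> dict[str, int]:
    """Parse the alternating header/value line pairs in netstat output."""
    counters: dict[str, int] = {}
    with open(path) as f:
        lines = f.read().splitlines()
    # The file alternates header lines ("TcpExt: Field1 Field2 ...")
    # with value lines ("TcpExt: 0 42 ...").
    for header, values in zip(lines[0::2], lines[1::2]):
        prefix, names = header.split(":", 1)
        _, nums = values.split(":", 1)
        for name, num in zip(names.split(), nums.split()):
            counters[f"{prefix}.{name}"] = int(num)
    return counters


def ecn_counters(counters: dict[str, int]) -> dict[str, int]:
    """Keep only counters whose names mention ECN (case-insensitive)."""
    return {k: v for k, v in counters.items() if "ecn" in k.lower()}


if __name__ == "__main__":
    before = ecn_counters(read_netstat())
    time.sleep(10)  # sample window; widen for low-traffic hosts
    after = ecn_counters(read_netstat())
    for name in sorted(after):
        delta = after[name] - before.get(name, 0)
        if delta:
            print(f"{name}: +{delta} over 10s")
```

Feeding deltas like these into your existing metrics pipeline gives an early signal of ECN marking behavior before committing to full fabric-wide telemetry.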

Architecture notes

Racks will combine custom accelerators with end-to-end Ethernet, PCIe, and optical links to support high-bandwidth collective communication and I/O. The design focus is standardization at the fabric layer and tight hardware-software co-design at the accelerator and system level.

For engineering leaders, this is a prompt to stress-test cluster managers and schedulers against heterogeneous accelerators. Keep your abstraction layers clean: device plugins, runtime shims, and framework backends should allow you to integrate new silicon without rewriting everything above it.
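One way to keep that abstraction clean, sketched below against a hypothetical in-house runtime: framework code programs against a small backend interface, and new silicon registers itself by name instead of requiring edits scattered through the stack. The backend names and method signatures here are illustrative assumptions, not any vendor's API.

```python
"""Sketch of a clean accelerator abstraction layer. The Protocol and
registry pattern is the point; the backend itself is a hypothetical
stand-in for a vendor runtime."""
from typing import Protocol


class AcceleratorBackend(Protocol):
    name: str

    def allocate(self, bytes_needed: int) -> int: ...   # returns a buffer handle
    def all_reduce(self, handle: int) -> None: ...      # collective over the fabric


_REGISTRY: dict[str, type] = {}


def register(cls: type) -> type:
    """Class decorator: make a backend discoverable by name."""
    _REGISTRY[cls.name] = cls
    return cls


def get_backend(name: str) -> AcceleratorBackend:
    return _REGISTRY[name]()


@register
class EthernetGpuBackend:
    """Placeholder backend; a real one would wrap vendor runtime calls."""
    name = "ethernet-gpu"

    def allocate(self, bytes_needed: int) -> int:
        return 0  # stub handle

    def all_reduce(self, handle: int) -> None:
        pass  # stub collective


if __name__ == "__main__":
    backend = get_backend("ethernet-gpu")
    buf = backend.allocate(1 << 20)
    backend.all_reduce(buf)
    print(f"ran on {backend.name}")
```

With this shape, integrating new silicon means shipping one registered class, while schedulers and framework code above it stay untouched.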

Timeline and capacity planning

With deployments starting in H2 2026 and scaling through 2029, infrastructure teams have a clear window to prepare facilities, networks, and software pipelines. Upfront planning reduces integration risk and shortens the time from hardware arrival to productive capacity.

  • Facilities: model electrical capacity, high-density rack layouts, and advanced cooling options; lock in utility and mechanical upgrades early (see the back-of-envelope model after this list).
  • Networking: run PoCs for Ethernet-based AI fabrics; validate ECN, PFC/DCB, QoS policies, and telemetry. Prove line-rate performance under collective-heavy workloads.
  • Platform: prepare Kubernetes device plugins, NUMA-aware scheduling, and per-tenant QoS. Validate storage bandwidth and checkpoint pipelines at scale.
  • Observability: expand flow telemetry, queue depth visibility, PFC storm detection, and end-to-end latency tracing across host, NIC, switch, and fabric.
  • Supply chain: align procurement, spares, and RMA processes; simulate failure domains and replacement policies to maintain SLAs.
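To make the facilities bullet concrete, here is a back-of-envelope power model. Every per-rack figure is an assumed placeholder; substitute vendor specifications once rack configurations are published.

```python
"""Back-of-envelope facilities model for AI rack capacity planning.
All constants are hypothetical placeholders, not announced specs."""

RACK_KW = 120            # assumed power draw per accelerator rack, kW
PUE = 1.2                # assumed power usage effectiveness (cooling overhead)
SITE_BUDGET_MW = 50      # assumed electrical budget for one facility, MW

facility_kw = RACK_KW * PUE                       # grid draw per rack incl. cooling
racks = int(SITE_BUDGET_MW * 1000 // facility_kw)
it_load_mw = racks * RACK_KW / 1000

print(f"{racks} racks fit in {SITE_BUDGET_MW} MW at PUE {PUE}")
print(f"IT load: {it_load_mw:.1f} MW, cooling/overhead: "
      f"{SITE_BUDGET_MW - it_load_mw:.1f} MW")
# Scaling the same arithmetic toward the announced 10 GW shows why
# utility and mechanical upgrades need to be locked in early.
```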

What leaders said

"Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential and deliver real benefits for people and businesses," said Sam Altman, co-founder and CEO of OpenAI.

"Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity."

"Broadcom's collaboration with OpenAI signifies a pivotal moment in the pursuit of artificial general intelligence," said Hock Tan, President and CEO of Broadcom.

"By building our own chip, we can embed what we've learned from creating frontier models and products directly into the hardware, unlocking new levels of capability and intelligence," said Greg Brockman, OpenAI co-founder and President.

Charlie Kawwas, Ph.D., President of Broadcom's Semiconductor Solutions Group, noted that custom accelerators pair well with standards-based Ethernet for cost- and performance-optimized next-generation AI infrastructure, with racks built on Broadcom Ethernet, PCIe, and optical connectivity.

What IT and dev teams can do now

  • Benchmark collective-heavy training jobs on Ethernet fabrics; tune ECN/PFC and verify tail latency under load (see the probe sketch after this list).
  • Abstract accelerator dependencies in code paths; keep framework and runtime interfaces modular to integrate new backends faster.
  • Harden data pipelines: prefetch, shuffle, and checkpoint throughput should match expected cluster scale.
  • Model TCO scenarios with Ethernet-based fabrics; account for optics, switch upgrades, and host NIC offloads.
  • Plan for talent: upskill network, platform, and MLOps teams on Ethernet-centric AI clusters and heterogeneous accelerator operations.
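For the benchmarking bullet, the sketch below measures all_reduce latency percentiles with torch.distributed (a real API; the payload size, iteration count, and gloo backend choice are illustrative assumptions). Launch one process per host, for example with torchrun.

```python
"""Collective tail-latency probe over an Ethernet fabric, using
torch.distributed. Payload and iteration counts are illustrative."""
import time

import torch
import torch.distributed as dist


def bench_all_reduce(numel: int, iters: int = 200) -> list[float]:
    """Time repeated blocking all_reduce calls and return sorted latencies."""
    tensor = torch.ones(numel, dtype=torch.float32)
    latencies = []
    for _ in range(iters):
        t0 = time.perf_counter()
        dist.all_reduce(tensor, op=dist.ReduceOp.SUM)
        latencies.append(time.perf_counter() - t0)
    return sorted(latencies)


if __name__ == "__main__":
    # "gloo" runs on plain TCP/Ethernet hosts; swap in "nccl" (with CUDA
    # tensors) when benchmarking GPU-direct collectives.
    dist.init_process_group(backend="gloo")
    lat = bench_all_reduce(numel=64 * 1024 * 1024 // 4)  # ~64 MB payload
    p50 = lat[len(lat) // 2]
    p99 = lat[int(len(lat) * 0.99)]
    if dist.get_rank() == 0:
        print(f"all_reduce p50={p50*1e3:.2f} ms  p99={p99*1e3:.2f} ms")
    dist.destroy_process_group()
```

Comparing p50 against p99 under concurrent background load is the quickest way to see whether ECN/PFC tuning is actually containing tail latency.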
