Ethernet Switch Sales Triple On Hyperscale AI Demand: What Sales Teams Need To Do Now
Ethernet switch sales for AI back-end networks more than tripled in 2025 and captured over two-thirds of AI data center switching by year-end, according to Dell'Oro Group. The spending is skewed to high-speed gear as hyperscalers and large AI firms scale clusters at breakneck pace.
Translation for sales: budgets are real, timelines are short, and buyers are prioritizing availability, performance, and a clean path to the next speed tier.
Where the money is: 800G now, 1.6T next
800G accounted for the bulk of shipments and revenue in AI back-end networks in 2025. Hyperscalers are standardizing on 800G now and planning their jump to 1.6T as soon as hardware is ready.
Vendors including Synopsys, Edgecore Networks, and Marvell have 1.6T lines queued up, with shipments expected in the second half of 2026. Build your pitch around immediate 800G delivery with a clear, low-friction upgrade path to 1.6T.
Market share heat: who's winning
In 2025, Celestica and Nvidia combined for roughly 50% share in AI back-end Ethernet switching. Arista ranked third, with some AI-related revenue deferred into later periods.
Cisco is accelerating shipments with large hyperscalers, and HPE/Juniper landed new accounts. Expect active new entrants and share shifts as buyers push for vendor diversity across chips and systems. What this means for your pitch:
- Lead with supply confidence and delivery dates. Speed wins deals; stock closes them.
- Position multi-vendor architectures to meet diversity mandates and mitigate risk.
- Anchor ROI to cluster growth: fewer network bottlenecks, higher GPU utilization, faster time-to-train.
- Offer a two-step plan: 800G now, mapped upgrades to 1.6T in 2026-2027.
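The ROI bullet above can be made concrete with a back-of-envelope calculation. All figures below (GPU count, GPU-hour cost, utilization rates) are hypothetical placeholders for illustration, not vendor or Dell'Oro data:

```python
# Hypothetical ROI sketch: value of reducing network-induced GPU idle time.
# Every number here is an illustrative assumption, not market data.

def monthly_utilization_savings(gpu_count: int, gpu_hour_cost: float,
                                util_before: float, util_after: float,
                                monthly_hours: int = 720) -> float:
    """Estimate monthly value of higher GPU utilization, i.e. fewer
    hours spent idle waiting on network bottlenecks."""
    extra_useful_hours = monthly_hours * (util_after - util_before)
    return gpu_count * extra_useful_hours * gpu_hour_cost

# Example: 4,096-GPU cluster, $2.00/GPU-hour, utilization rising
# from 70% to 85% after removing network bottlenecks.
savings = monthly_utilization_savings(4096, 2.00, 0.70, 0.85)
print(f"${savings:,.0f} per month")
```

Swap in the prospect's own cluster size, blended GPU-hour cost, and measured utilization during discovery; the structure of the argument stays the same.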
Why Ethernet is surging in AI back-end networks
The growing size of AI clusters and ongoing supply constraints are pushing buyers to Ethernet for interoperability and vendor choice. Amazon, Microsoft, Meta, Oracle, and xAI are all adopting Ethernet for AI back-end networks.
While InfiniBand continues to grow, Ethernet is expanding faster, with Dell'Oro projecting nearly $80B in Ethernet switch sales over the next five years and more than $100B in related investments by 2030. The industry is aligning Ethernet with lossless features and congestion controls to support high-performance workloads.
Two efforts to reference in conversations: the Ultra Ethernet Consortium and the Open Compute Project's new Ethernet for Scale-Up Networking spec (OCP ESUN). Both signal confidence that Ethernet can handle large GPU domains with mechanisms to reduce packet loss.
Sales playbook: key questions to qualify
- Speed roadmap: Are you standardizing on 800G now? What's the timeline for 1.6T?
- Scale: How many GPUs per cluster today, and what's the target by year-end and 12-24 months out?
- Network objectives: Training vs. inference mix, east-west traffic patterns, tolerance for packet loss.
- Diversity policy: Do you require multiple vendors at the chip and system levels?
- Standards: Are UEC or OCP ESUN features part of your requirements?
- Dependencies: Optics availability, cabling preferences, and data center power/thermal limits.
- Procurement: Budget windows for H2 2026 and 2027, delivery deadlines, and acceptance criteria.
Common objections and how to respond
- "We're worried about congestion and packet loss." - Highlight lossless Ethernet features such as priority flow control (PFC) and ECN-based congestion control, plus emerging ESUN-aligned designs that reduce drops in scale-up GPU fabrics.
- "We can't afford downtime." - Lead with tested reference designs, multi-vendor spares strategy, and proven 800G production deployments while planning a controlled migration to 1.6T.
- "We're concerned about lock-in." - Emphasize Ethernet interoperability, open ecosystems, and the active push for vendor diversity at both chip and system tiers.
Who to prioritize
Top targets: hyperscalers and large AI infrastructure buyers scaling GPU clusters: Amazon, Microsoft, Meta, Oracle, and xAI. Also track OEMs and new entrants pursuing AI-dedicated data centers that need fast delivery and a standards-based path to 1.6T.
Next steps for your pipeline
- Build 800G inventory-backed offers with clear upgrade terms to 1.6T (pricing bands, trade-in credits, install windows).
- Package optics, cabling, and services to remove friction and compress deployment timelines.
- Map accounts to multi-vendor options and prepare swap-in scenarios to meet diversity policies.
- Time proposals around H2 2026 shipment ramps for 1.6T while closing 800G now.
If you sell into technical buyers and want sharper discovery, demos, and deal strategy for AI infrastructure, explore the AI Learning Path for Technical Sales Representatives.