AI networking boom pushes Cisco past estimates, raises outlook

Cisco beat on earnings as AI-ready networking drove orders, including $1.3B from hyperscalers, and shares popped ~7% after hours. For sellers: tie upgrade pitches to model velocity and ROI.

Categorized in: AI News, Sales
Published on: Nov 13, 2025

Cisco's AI-Fueled Quarter: What Sales Teams Can Do With It Right Now

Cisco beat earnings and revenue expectations on the back of demand for AI-ready networking gear. Adjusted EPS hit $1 (vs. $0.98 expected), up from $0.91 last year. Revenue rose 8% year over year to $14.88 billion. Shares jumped nearly 7% after hours.

The signal is simple: AI infrastructure spending is here, and it's driving real orders. Hyperscalers placed $1.3 billion in AI infrastructure orders, pushing product sales up 10%. Cisco also lifted its fiscal 2026 outlook to EPS of $4.08-$4.14 and revenue above $60 billion, pointing to a multi-year upgrade cycle.

Numbers sellers can quote

  • EPS: $1 vs. $0.98 expected (FactSet), up from $0.91.
  • Revenue: $14.88B, +8% year over year.
  • Product sales: +10% on strong AI demand.
  • AI orders: $1.3B from hyperscalers.
  • Stock reaction: ~+7% after hours.
  • FY26 outlook: EPS $4.08-$4.14; revenue >$60B.

Why this matters for sales

  • Budgets are moving to AI-ready networks. Buyers are prioritizing bandwidth, low latency, and security for AI workloads.
  • Refresh cycles are accelerating. Legacy switching, routing, and data center interconnect can't support AI traffic at scale.
  • Multi-year deals are back. Larger, phased programs (design → pilot → scale) are easier to justify with clear workload growth.

Sales plays to run this week

  • Target signals: Accounts hiring ML engineers, building data lakes, or expanding colocation footprint. Look for GPU cluster mentions in earnings calls.
  • Talk track: "Your AI pipeline is only as fast as your network. We can cut inference bottlenecks and reduce retrain windows with higher throughput and better east-west traffic management."
  • ROI angle: Tie network upgrades to time-to-model and cost-per-inference. Faster training and lower congestion beat generic "performance" claims.
  • Land-and-expand: Start with spine/leaf upgrades or fabric redesign in one data hall. Add observability and zero-trust later.
  • Executive story: "Capex today prevents opex waste tomorrow: idle GPUs cost more than modern switches."
  • Partner motion: Bring in cloud, storage, and security partners to package a clear path from POC to production.
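The idle-GPU argument in the executive story above can be made concrete with back-of-envelope math tied to the account's own numbers. A minimal sketch in Python; every figure here (GPU count, hourly cost, idle fraction, upgrade price, recovery rate) is a hypothetical placeholder, not a number from Cisco's report:

```python
# Back-of-envelope ROI sketch: idle-GPU waste vs. network upgrade cost.
# All inputs are illustrative placeholders -- swap in the account's real numbers.

def idle_gpu_cost_per_year(num_gpus, cost_per_gpu_hour, idle_fraction,
                           hours_per_year=8760):
    """Annual spend wasted while GPUs sit idle waiting on the network."""
    return num_gpus * cost_per_gpu_hour * idle_fraction * hours_per_year

def payback_months(upgrade_cost, annual_waste_recovered):
    """Months until the upgrade pays for itself in recovered GPU time."""
    return upgrade_cost / (annual_waste_recovered / 12)

# Hypothetical account: 1,024 GPUs at $2.50/hr effective cost,
# 15% idle due to network congestion.
waste = idle_gpu_cost_per_year(1024, 2.50, 0.15)
print(f"Annual idle-GPU waste: ${waste:,.0f}")   # Annual idle-GPU waste: $3,363,840

# A $1.2M fabric upgrade assumed to recover two-thirds of that waste:
months = payback_months(1_200_000, waste * (2 / 3))
print(f"Payback period: {months:.1f} months")    # Payback period: 6.4 months
```

Framing the upgrade as a payback period in months, rather than a feature list, fits the cost-per-inference and time-to-model angles above.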

Discovery questions that open doors

  • Which AI workloads are production vs. pilot, and where are they bottlenecked (IO, latency, throughput)?
  • How often are model retrains delayed due to network congestion or change windows?
  • What's your target cost-per-inference and time-to-deploy for new model versions?
  • How are you segmenting GPU clusters and securing east-west traffic?
  • Where do you need observability to prove capacity and SLA compliance?

Common objections and quick counters

  • "We can wait." Waiting while GPUs sit idle wastes budget weekly. A measured, phase-one upgrade removes the worst bottlenecks first.
  • "We're multi-cloud; it's complex." That's exactly why standardized, AI-ready fabric and policy automation help. Fewer one-off fixes, more predictable performance.
  • "Security will slow us down." Modern segmentation and telemetry can be baked in without adding hops that drag latency.

Who to prioritize

  • Hyperscaler-adjacent: SaaS, fintech, adtech, and gaming with GPU footprints and spiking east-west traffic.
  • Data-heavy enterprises: Retail, healthcare, telecom, and manufacturing running computer vision, RAG, or personalization at scale.
  • Public sector and EDU: Research clusters with predictable grant timelines and transparent roadmaps.

Email and call snippets

  • Email opener: "Noticed your push into [use case]. Teams tell me their GPUs are waiting on the network more than models. Worth a 15-minute review of where congestion is costing you?"
  • Call hook: "If your retrain windows slip, your release calendar slips. We can buy back that time with the same fabric changes other customers made for their AI rollouts."

What to watch next

  • Backlog conversion speed: Faster shipment cycles mean tighter deal timelines, so start capacity planning early.
  • Hyperscaler capex guides: More GPU spend usually pairs with higher network upgrades.
  • Supply chain ripple effects: Lead times on optics, switches, and NICs can affect implementation dates. Set expectations in your SOW.

Bottom line: Buyers are funding AI workloads, and the network is the limiter. Lead with performance outcomes tied to model velocity, not just speeds and feeds. Help them spend once, prove it fast, and scale cleanly.

