Former Intel CEO Pat Gelsinger warns AI spending mirrors dot-com bubble, sees efficiency gains by decade's end

AI spend echoes the dot-com surge: the innovation is real, but unlimited demand isn't. Product teams should pace their bets, ship small, and steer with stage-gates and unit economics.

Categorized in: AI News, Product Development
Published on: Oct 13, 2025

AI Spending Looks Like Dot-Com: What Product Teams Should Do Now

Former Intel CEO Pat Gelsinger warns that today's AI spending surge mirrors the early internet boom. Massive innovation, yes. But assuming unlimited, linear growth is how budgets and product bets go off the rails.

He's still bullish long term: the next 2-4 years will look much like today, with efficiency inflection points arriving by decade's end. That means product leaders should bank practical AI wins now while pacing their big bets. For context on the historical parallel, see the dot-com bubble overview.

The signal for product leaders

  • Assume innovation compounds; don't assume demand does. Adoption lags hype.
  • Price in volatility for models, vendors, and unit economics. Avoid multi-year lock-ins without exit clauses.
  • Optimize for learning velocity over spend velocity. Ship small, measure, iterate.
  • Pace infrastructure with real usage, not slideware forecasts.

Budget guardrails that prevent regret spend

  • Stage-gate every program with explicit exit criteria: problem fit, proxy KPI lift, cost per inference, latency SLO, security review.
  • Model full TCO: $/inference at 50-70% utilization, $/1k tokens, QPS per GPU, memory bandwidth constraints, and the engineering cost to maintain it (a back-of-envelope sketch follows this list).
  • Set hard caps by stage: Explore (< $50k), Validate ($50k-$250k), Scale ($250k-$2M), with kill thresholds at each gate.
  • Cloud first for experiments; hold off on on-prem until you have stable forecasts of 80%+ utilization and an 18-24 month payback.
  • Track energy as a first-class constraint: kWh per million tokens and data center limits.
  • Introduce AI FinOps: monthly variance to plan, idle GPU alerts, and per-team cost dashboards.
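
A minimal back-of-envelope sketch of that TCO math in Python. Every figure in it (GPU hourly rate, throughput, utilization, capex) is an illustrative assumption, not a benchmark; substitute your own vendor quotes and measured numbers.

```python
# Back-of-envelope unit economics for an AI feature.
# All inputs are illustrative placeholders -- replace with your own pricing,
# measured throughput, and utilization forecasts.

def cost_per_inference(gpu_hour_cost: float, qps: float, utilization: float) -> float:
    """Cost of one request when a GPU billed at gpu_hour_cost $/hr serves
    qps requests/sec but is only busy `utilization` of the time."""
    requests_per_hour = qps * utilization * 3600
    return gpu_hour_cost / requests_per_hour

def cost_per_1k_tokens(gpu_hour_cost: float, tokens_per_sec: float, utilization: float) -> float:
    """$ per 1,000 generated tokens at a given utilization."""
    tokens_per_hour = tokens_per_sec * utilization * 3600
    return gpu_hour_cost / tokens_per_hour * 1000

def on_prem_payback_months(capex: float, monthly_cloud_bill: float, monthly_opex: float) -> float:
    """Months until owned hardware beats renting, ignoring depreciation nuance."""
    monthly_savings = monthly_cloud_bill - monthly_opex
    return float("inf") if monthly_savings <= 0 else capex / monthly_savings

if __name__ == "__main__":
    # Illustrative inputs: $4/hr GPU, 5 req/s, 2,500 tokens/s, 60% utilization.
    print(f"$/inference:  {cost_per_inference(4.0, 5.0, 0.6):.4f}")
    print(f"$/1k tokens:  {cost_per_1k_tokens(4.0, 2500.0, 0.6):.4f}")
    # $300k of hardware vs. a $40k/mo cloud bill and $15k/mo to run it yourself.
    print(f"payback (mo): {on_prem_payback_months(300_000, 40_000, 15_000):.1f}")
```

If the payback stretches past the 18-24 month window above, the guardrail says stay in the cloud.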

Portfolio: no-regret moves vs. targeted bets

  • No-regret
    • Data quality pipeline and retrieval foundations (RAG, metadata, retention policies).
    • Evaluation harness: golden sets, hallucination checks, bias tests, red-team prompts (a minimal harness sketch follows this list).
    • Observability: prompt/version tracking, feedback loops, drift alerts.
    • Security and privacy baselines: PII handling, model abuse prevention, approval workflows.
  • Targeted bets
    • High-value agentic workflows with clear latency and liability bounds.
    • Domain-specific fine-tunes where data advantage exists.
    • Multimodal use cases only where input richness ties to measurable KPI lift.
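
As a starting point for the evaluation-harness item above, here's a minimal golden-set loop. `call_model`, the `GoldenCase` fields, and the substring scoring are all illustrative assumptions; a real harness would wire in your actual model client and stronger scorers (LLM-as-judge, embedding similarity, dedicated bias suites).

```python
# Minimal golden-set evaluation loop -- a sketch, not a full harness.
from dataclasses import dataclass, field

@dataclass
class GoldenCase:
    prompt: str
    must_contain: list[str]                                     # facts the answer should include
    must_not_contain: list[str] = field(default_factory=list)   # known hallucination traps

def call_model(prompt: str) -> str:
    """Placeholder -- wire this to your model endpoint or vendor SDK."""
    raise NotImplementedError

def evaluate(cases: list[GoldenCase]) -> dict:
    failures = []
    for case in cases:
        answer = call_model(case.prompt).lower()
        ok = all(s.lower() in answer for s in case.must_contain) and \
             not any(s.lower() in answer for s in case.must_not_contain)
        if not ok:
            failures.append(case.prompt)
    return {"pass_rate": 1 - len(failures) / len(cases), "failures": failures}

# Run on every prompt, model, or retrieval change; gate releases on pass_rate.
```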

Metrics that keep hype honest

  • Unit economics: $/1k tokens, $/inference at target latency, and margin impact per feature.
  • Experience: 95th percentile latency, task success rate, human-in-the-loop acceptance rate (a scorecard sketch follows this list).
  • Model efficiency: tokens per task, cache hit rate, prompt token reduction over time.
  • Adoption: weekly active users, retained use after four weeks, task time saved.
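
A sketch of how a weekly scorecard for these metrics could be computed from raw request logs. The event schema (latency_ms, tokens, cost_usd, task_success) is assumed for illustration; map the field names to whatever your observability stack actually records.

```python
# Weekly scorecard from request-level events -- field names are illustrative.
import math

def p95(values: list[float]) -> float:
    """Nearest-rank 95th percentile."""
    ordered = sorted(values)
    return ordered[math.ceil(0.95 * len(ordered)) - 1]

def scorecard(events: list[dict]) -> dict:
    tokens = sum(e["tokens"] for e in events)
    cost = sum(e["cost_usd"] for e in events)
    return {
        "p95_latency_ms": p95([e["latency_ms"] for e in events]),
        "task_success_rate": sum(e["task_success"] for e in events) / len(events),
        "usd_per_1k_tokens": cost / tokens * 1000,
        "tokens_per_task": tokens / len(events),
    }

# Three fabricated events, just to show the shape of the output.
events = [
    {"latency_ms": 820,  "tokens": 950,  "cost_usd": 0.012, "task_success": True},
    {"latency_ms": 1340, "tokens": 1600, "cost_usd": 0.021, "task_success": True},
    {"latency_ms": 2900, "tokens": 2400, "cost_usd": 0.034, "task_success": False},
]
print(scorecard(events))
```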

What the timeline implies

Next 2-4 years: similar patterns and costs, with incremental efficiency gains. Don't bet the roadmap on breakthroughs that aren't here yet.

By decade's end: lower inference costs and better power efficiency make scaled deployment more practical. Design architectures that can swap models and hardware when that arrives.
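
One way to preserve that flexibility is a thin provider abstraction, so application code never imports a vendor SDK directly and a model or hardware swap becomes a config change. The sketch below is a hypothetical pattern; the class names are placeholders and the actual SDK calls are omitted.

```python
# Thin provider abstraction -- app code depends on the interface, not a vendor.
from typing import Protocol

class TextModel(Protocol):
    def generate(self, prompt: str, max_tokens: int = 512) -> str: ...

class VendorAModel:
    """Would wrap a hosted vendor's SDK behind the shared interface."""
    def generate(self, prompt: str, max_tokens: int = 512) -> str:
        raise NotImplementedError  # vendor SDK call goes here

class LocalGPUModel:
    """Would wrap a self-hosted model server behind the same interface."""
    def generate(self, prompt: str, max_tokens: int = 512) -> str:
        raise NotImplementedError  # HTTP call to your inference server goes here

def build_model(provider: str) -> TextModel:
    # One routing point keeps the swap decision out of feature code.
    return {"vendor_a": VendorAModel, "local_gpu": LocalGPUModel}[provider]()
```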

90-day plan to de-risk spend

  • Audit all AI commitments: objectives, owners, costs to date, and next milestone dates.
  • Kill or pause anything without a clear KPI and evaluation plan.
  • Stand up an eval and observability stack shared across teams.
  • Pick three workflows with clear savings or revenue impact; build thin slices to production with safety rails.
  • Negotiate flexible vendor terms: usage-based pricing, model-swap rights, data deletion guarantees.
  • Publish a quarterly AI scorecard to leadership: spend, outcomes, learnings, next gates.

Build, buy, or partner: a simple decision frame

  • Build: proprietary data advantage, latency/IP needs, and sustained volume to justify TCO.
  • Buy: commodity capability, speed matters, and switching costs are manageable.
  • Partner: regulated workflows, co-development upside, or ecosystem distribution benefits.

Red flags that signal bubble behavior

  • GPU procurement outpaces shipped value.
  • Vanity benchmarks instead of real task metrics.
  • Vendor lock-in without exit plans or data portability.
  • "AI first" features with unclear user problem or P&L impact.

The takeaway: don't assume straight-line growth. Treat AI like any platform shift: focus on unit economics, staged learning, and flexible architecture. That positions your team to scale when efficiency curves bend.

For a balanced macro view on adoption and productivity trends, see Goldman Sachs Research's perspective on generative AI and growth potential here.

Skill up your product org

If you're building a capability map and training plan, explore role-based AI courses here and the latest hands-on programs here.

