AI Everywhere, Productivity Nowhere

AI is everywhere in the talk, but barely in the numbers. The lift comes when firms wire it into real workflows, measure hard, and scale only what beats the control.

Published on: Feb 18, 2026

AI's Solow Moment: Why Productivity Isn't Spiking Yet

In 1987, Robert Solow pointed out a harsh truth: new tech was everywhere except in the productivity numbers. We're there again with AI. Companies talk about it. Investors fund it. But the macro data isn't moving the way the headlines suggest.

Across the S&P 500, hundreds of firms now mention AI on earnings calls. Yet a new National Bureau of Economic Research survey of 6,000 executives found most see little operational impact so far. Two-thirds say they use AI, but the typical usage is just 1.5 hours per week. Nearly 90% report no effect on productivity or employment over the last three years, even as they expect a 1.4% productivity lift over the next three.

Some studies show pockets of upside. Others show modest gains. The Federal Reserve Bank of St. Louis noted a 1.9% bump in excess cumulative productivity growth since late 2022, while a 2024 MIT paper projects a 0.5% productivity increase over the next decade: useful, but far from the hype. Meanwhile, outside of a few tech giants, earnings and margins aren't reflecting an AI wave yet.

Workers feel the gap, too. ManpowerGroup reports regular AI use rose 13% in 2025, but confidence in the tech fell 18%. Even firms bullish on automation are rethinking talent pipelines: IBM's HR leadership, for example, emphasized hiring more young workers to avoid hollowing out future managers by over-automating entry-level roles.

What the 1990s Can Teach Us

IT disappointed before it delivered. After years of underwhelming returns, productivity surged from 1995 to 2005. We may be at a similar inflection point. Some economists point to early signs, such as GDP tracking higher even as job growth cools, that hint at a productivity jump as companies shift from pilots to scaled deployment.

Expect a lag. Process redesign, training, and data plumbing take time. Once those foundations settle, the lift can be fast.

Why Your AI ROI Is Flat

  • Shallow adoption: 1.5 hours per week won't move core KPIs.
  • No process redesign: plugging AI into broken workflows just speeds up waste.
  • Scattered tools: too many point solutions, no standard stack, zero reuse.
  • Data friction: messy inputs, weak retrieval, poor governance.
  • Skills and trust gap: users lack training; leaders overestimate readiness.
  • Missing measurement: no baselines, no clear target metrics, no control groups.
  • Change fatigue and compliance blockers: slow reviews, unclear guardrails.

A Practical Playbook for Executives and HR

  • Pick three workflows per function where time and quality truly matter (support tickets, RFPs, month-end close, QA checks). Kill vanity pilots.
  • Redesign the process end-to-end: who does what, with which prompts, what inputs, what human checks, and where the data lands. Then document it.
  • Standardize your stack: one model access layer, one prompt library, one analytics view. Reduce tool sprawl.
  • Set targets before build: "Cut handling time 30%," "Reduce defects 20%," "Increase win rate 3 pts." Establish baselines and a control group.
  • Stand up an AI PMO (small, sharp team). Name function-level "AI champions" accountable for adoption, training, and monthly KPI reviews.
  • Train for judgment, not just prompting. Pair junior staff with AI to accelerate learning instead of replacing the ladder they need.
  • Protect the pipeline: preserve entry-level roles that feed future managers. Blend automation with apprenticeships and rotational programs.
  • Data and governance: clean inputs, retrieval policies, PII rules, human-in-the-loop for high-risk steps, audit trails for every assisted output.
  • Communicate job impact clearly: redeploy time saved to higher-value work. Track it. If you can't show where hours went, they probably vanished.
  • Review quarterly: keep what beats the control, kill what doesn't, and scale the winners.
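The "set targets before build" and "review quarterly" steps reduce to simple arithmetic: compare observed improvement over the baseline against the pre-set target. A minimal sketch, with entirely hypothetical KPI numbers (the 30% and 20% targets echo the examples above):

```python
# Hypothetical quarterly review: did each KPI clear its pre-set target?
# All figures are illustrative; lower is better for both metrics here.
kpis = {
    # name: (baseline, observed, target_improvement)
    "handling_time_min": (40.0, 27.0, 0.30),
    "defect_rate_pct":   (5.0, 4.2, 0.20),
}

def beats_target(baseline, observed, target):
    """Fractional improvement vs baseline, and whether it clears the target."""
    improvement = (baseline - observed) / baseline
    return improvement, improvement >= target

for name, (base, obs, target) in kpis.items():
    imp, ok = beats_target(base, obs, target)
    print(f"{name}: {imp:.1%} vs {target:.0%} target -> {'scale' if ok else 'kill'}")
```

The point of the sketch is the discipline, not the code: without a recorded baseline and a numeric target, "keep what beats the control" has nothing to compare against.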

What to Measure This Quarter

  • Core ops: cycle time per task, cost per ticket/case, first-pass yield, rework rate, customer CSAT/NPS, win/close rates, SLA adherence.
  • Financials: throughput per FTE, revenue per employee, gross margin, unit economics.
  • Adoption: weekly active users, assisted tasks per user, minutes of AI use per week, prompt reuse rate.
  • Quality and risk: hallucination/defect incidents, review time per assisted output, compliance exceptions.

Run 4-week sprints: week 1 baseline, weeks 2-3 build and train, week 4 A/B test. Publish a one-page scoreboard. If it beats the control, scale. If not, pivot or shut it down.
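The week-4 A/B test is a standard two-sample comparison. A minimal sketch of "did it beat the control," using hypothetical per-ticket handling times and a rough Welch t-statistic (standard-library only):

```python
from statistics import mean, stdev
from math import sqrt

def lift_vs_control(treatment, control):
    """Fractional improvement of treatment over control (lower is better,
    e.g. minutes per ticket), plus a rough Welch t-statistic."""
    mt, mc = mean(treatment), mean(control)
    lift = (mc - mt) / mc  # fraction of baseline time saved
    se = sqrt(stdev(treatment) ** 2 / len(treatment)
              + stdev(control) ** 2 / len(control))
    return lift, (mc - mt) / se

# Hypothetical week-4 samples: minutes per ticket
control = [42, 38, 45, 40, 44, 39, 41, 43]
treated = [30, 28, 33, 29, 35, 27, 31, 32]

lift, t = lift_vs_control(treated, control)
print(f"lift: {lift:.1%}, t ~ {t:.1f}")  # compare lift against the target
```

With real ticket volumes you would want a proper significance test and more than a week of data, but even this crude check forces the question the playbook asks: beat the control, or shut it down.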

Plan for the J-Curve

Expect an initial dip: time spent on training, playbooks, data cleanup, and governance. The lift comes after standardization, when reuse, shared prompts, and cleaner data compound the gains. Tool prices will keep dropping; your edge is operational.

The value isn't the model. It's how you wire it into workflows, skills, and decisions. That's where productivity shows up: quietly at first, then all at once.

The Bottom Line

AI will pay when it stops being a talking point and starts living inside your processes. Pick fewer bets, measure harder, and scale only what clears the bar. Do that, and the paradox fades into results you can see in your own P&L, before it shows up in the national stats.

