Executives are doubling down on AI, driven by the risk of falling behind, not ROI
AI spend is set to rise again in 2026. Accenture's latest Pulse of Change survey shows 86% of executives plan to increase AI investment, and nearly half would keep investing even if an AI "bubble" burst.
Only 12% cite ROI as the primary reason. The survey spans 3,650 C-suite leaders and 3,350 workers from companies with $500M+ in annual revenue across 20 industries and 20 countries, so the signal is strong.
Strategy over spreadsheets
Leaders say AI is a strategic necessity. Forty-six percent would hold or grow spend even during a market correction, signaling conviction beyond quarterly math.
At the same time, 78% now view AI as a revenue growth lever more than a cost play, up from 65% in mid-2024. The issue: few teams can explain how they'll measure that growth in the near term.
The workforce gap is the bottleneck
Only 40% of employees say training prepared them for AI's impact on their roles. Just 20% feel like active co-creators in how AI changes their work.
Regular use of AI agents by employees dropped 10 points since summer 2025. Only 32% of leaders report sustained, enterprise-level impact, while 54% of employees say low-quality or misleading outputs are wasting time.
Process changes without role changes won't scale
About a fifth of companies are redesigning processes for AI, but fewer than 10% are redesigning roles. That's a mismatch. Process changes die on the vine if roles, skills, and incentives don't shift with them.
Employees are waiting for direction: only 18% strongly agree leadership has clearly communicated the 2026 change plan. Only 20% say they understand how AI agents and agentic systems will affect roles and required skills.
Why this matters for HR and strategy
Twenty-three percent of C-suite leaders called out access to skilled talent and training as critical for scaling AI. Yet with ROI under-defined, the people side of the program risks being underfunded.
Meanwhile, 35% of leaders say the biggest unlock is the right data strategy and core digital capabilities. Employee confidence in their organization's ability to respond to tech disruption sits at 38% and is trending down.
What to do next (practical, repeatable, defensible)
- Define the value thesis by domain: revenue, risk, and productivity. Tie every AI use case to one primary outcome with a 6-12 month target.
- Publish a role taxonomy for AI: which roles change, how, and when. Redesign responsibilities and incentives, not just workflows.
- Stand up a skills operating system: skill baselines, proficiency rubrics, and time-bound upskilling plans that map to role changes.
- Instrument measurement early: lightweight baselines before pilots; control groups where possible; simple A/B comparisons for agent workflows (see the sketch after this list).
- Quality guardrails: human-in-the-loop criteria, evaluation datasets, error budgets, and fallbacks for critical tasks. See the NIST AI RMF for guidance.
- Agent adoption playbook: onboarding checklists, job aids, shadowing, and weekly office hours until usage stabilizes above target.
- Change communication cadence: monthly progress notes, role-level FAQs, and live forums. If direction isn't clear, usage will stall.
- Funding model: commit a fixed percentage of AI program spend to role redesign, training, and enablement. No people budget, no scale.
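To make the measurement bullet concrete, here is a minimal Python sketch of a pre/post comparison for a single agent workflow. The cycle-time figures and the 20% scale-up threshold are invented placeholders, not survey data; substitute your own logs and the target agreed in your value thesis.

```python
# Minimal sketch: compare cycle times (minutes per task) for a control group
# working as before vs. a pilot group using the AI agent. All numbers are
# illustrative placeholders.
from statistics import mean, stdev

control = [42, 38, 55, 47, 51, 44, 39, 60, 48, 46]   # baseline cycle times
pilot   = [31, 35, 29, 40, 33, 38, 27, 36, 30, 34]   # agent-assisted cycle times

def summarize(label: str, samples: list[float]) -> None:
    print(f"{label}: mean={mean(samples):.1f} min, sd={stdev(samples):.1f}, n={len(samples)}")

summarize("Control", control)
summarize("Pilot", pilot)

lift = 1 - mean(pilot) / mean(control)
print(f"Cycle-time reduction: {lift:.0%}")

# Gate scale-up on a pre-agreed target (here, a hypothetical 20% reduction
# sustained across two measurement windows) rather than on anecdotes.
print("Scale" if lift >= 0.20 else "Iterate")
```

The point is less the arithmetic than the discipline: capture a baseline before the pilot starts, hold a comparison group where you can, and decide the scale-up threshold before you see the results.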
A simple scorecard HR can own
- Adoption: percent of target roles using AI weekly; time-to-first-success metric by role.
- Productivity: cycle time, throughput, and rework rates pre/post deployment.
- Quality: factuality error rate, escalation rate, and customer-impact incidents.
- Capability supply: percent of critical roles at target skill level; internal fill rate for AI-augmented roles.
- Reskilling throughput: learners to proficiency per quarter; training completion-to-usage conversion.
- Financial linkage: contribution to pipeline/revenue, margin lift, or cost to serve, tied to the use-case value thesis.
Publish this scorecard at the portfolio level and by business unit. Keep it stable for two quarters to build trust, then refine.
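If it helps to make the roll-up concrete, here is a minimal sketch of that scorecard as one record per business unit, averaged into a portfolio view. The field names mirror the categories above; the unit names and figures are invented placeholders.

```python
# Minimal sketch: one scorecard record per business unit, rolled up into a
# portfolio view. Values are invented placeholders for illustration only.
from dataclasses import dataclass, asdict

@dataclass
class AIScorecard:
    business_unit: str
    weekly_adoption_pct: float          # % of target roles using AI weekly
    cycle_time_change_pct: float        # pre/post change in cycle time (negative = faster)
    factual_error_rate: float           # errors per 100 reviewed outputs
    critical_roles_at_skill_pct: float  # % of critical roles at target skill level
    revenue_linkage_usd: float          # contribution tied to the use-case value thesis

units = [
    AIScorecard("Customer Service", 62.0, -18.0, 2.1, 45.0, 1_200_000),
    AIScorecard("Finance Ops",      38.0,  -7.0, 0.9, 30.0,   400_000),
]

# Portfolio roll-up: a simple average across units; weight by headcount or
# revenue if your business units differ widely in size.
portfolio = {
    field: round(sum(getattr(u, field) for u in units) / len(units), 1)
    for field in asdict(units[0]) if field != "business_unit"
}
print(portfolio)
```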
Strengthen data and platform foundations
- Data readiness: governed access to high-signal datasets, feedback loops from agents back to training sets, and clear retention policies.
- Reusable components: model gateways, prompt/agent pattern libraries, evaluation harnesses (a minimal example follows this list), and audit trails.
- Portfolio discipline: a single intake and stage-gate to kill weak use cases early and scale proven ones.
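For the evaluation-harness item, a minimal sketch: run each agent or prompt variant against a fixed evaluation set and compute a pass rate before anything ships. `run_agent` is a hypothetical stand-in for a call through your model gateway, and the test cases are invented.

```python
# Minimal sketch of an evaluation harness: score an agent variant against a
# fixed evaluation set. `run_agent` and the cases are hypothetical placeholders.
from typing import Callable

eval_set = [
    {"input": "What is our refund window?", "expected": "30 days"},
    {"input": "Which plan includes SSO?",   "expected": "Enterprise"},
]

def run_agent(prompt: str) -> str:
    # Placeholder: replace with a call through your model gateway.
    return "30 days" if "refund" in prompt else "The Enterprise plan includes SSO."

def evaluate(agent: Callable[[str], str], cases: list[dict]) -> float:
    passed = sum(case["expected"].lower() in agent(case["input"]).lower() for case in cases)
    return passed / len(cases)

pass_rate = evaluate(run_agent, eval_set)
# Log the pass rate per variant; block release when it falls below the error budget.
print(f"Pass rate: {pass_rate:.0%}")
```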
If you need structured upskilling paths
Role-based learning beats generic training. For curated tracks by job family and skill, explore Courses by Job at Complete AI Training.
The bottom line
Confidence is high, clarity is not. Spending will rise, but impact will lag until leaders fund role redesign, install practical measurement, and fix quality at the source.
Treat AI like any other strategic bet: define value, change the work, prove it with data, and communicate on a schedule. Do that, and the gap between conviction and results closes fast.
For broader market context on enterprise AI adoption patterns, see McKinsey's State of AI research.