AI ambition is high. Workforce readiness isn't.
Executives overwhelmingly see AI as a competitive edge, yet only a fraction are turning pilots into repeatable business value. New research by Economist Impact shows 88% view AI as an advantage, but just 4% have achieved scalable impact. Nearly all firms say they're "building AI skills," but most employees aren't being trained in any meaningful way.
The gap isn't technical. It's strategic. Companies are chasing quick productivity wins while underinvesting in skills, governance, and leadership accountability: the real levers that decide whether AI sticks.
The productivity trap
Most leaders (73%) back AI to drive employee productivity, and 79% measure ROI primarily through productivity metrics. In Tokyo, that figure jumps to 88%, the highest among the cities studied.
This mindset delivers short-term gains but limits long-term value. Few leaders track outcomes like employee engagement, capability growth, or retention, the signals that AI is scaling beyond isolated use cases.
As one researcher put it, many are still laying tracks while the train is moving. Productivity is a start. Strategy, skills, and governance turn it into advantage.
AI maturity: progress without scale
More than two-thirds say they've moved past experimentation, yet scalable results remain rare: New York leads at 6%, Tokyo and London at 5%, Singapore at 4%, and Sydney at 0%.
Weak accountability and governance are major blockers. While every firm has discussed responsible AI, only 8% have comprehensive, actively enforced frameworks, dropping to just 2% among small firms. Enforcement is strongest in Tokyo (11%) and New York (10%), followed by London (8%), Singapore (5%), and Sydney (4%).
The biggest risks aren't exotic model failures; they're internal missteps: poor data handling, weak oversight, and careless use of sensitive information.
The skill gaps that create risk
- Cybersecurity: 96% say it's essential; only 20% believe their teams are proficient (76-point gap).
- Data privacy: large shortfall between importance and proficiency (68-point gap).
- Bias detection: similarly wide gap (71 points).
These gaps raise operational and reputational risk as AI moves into core processes. Without targeted upskilling, guardrails won't be enforced consistently.
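The gap figures above are simple arithmetic: the percentage of leaders rating a skill as essential minus the percentage who believe their teams are proficient. A minimal sketch, using the reported cybersecurity figures; the proficiency numbers for the other two rows are hypothetical, chosen only to reproduce the reported gaps:

```python
# Skill gap in percentage points: perceived importance minus team proficiency.
# Cybersecurity figures are from the research; the other proficiency values
# are assumptions that back out the reported 68- and 71-point gaps.
skills = {
    "cybersecurity":  {"importance": 96, "proficiency": 20},  # 76-point gap (reported)
    "data privacy":   {"importance": 94, "proficiency": 26},  # 68-point gap (proficiency assumed)
    "bias detection": {"importance": 93, "proficiency": 22},  # 71-point gap (proficiency assumed)
}

def skill_gap(importance: int, proficiency: int) -> int:
    """Gap in percentage points between importance and proficiency ratings."""
    return importance - proficiency

for name, s in skills.items():
    print(f"{name}: {skill_gap(s['importance'], s['proficiency'])}-point gap")
```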
Training talk vs. investment reality
Leaders say AI talent is strategic (88%) and believe leadership is aligned (60%). But only 38% have sufficient, dedicated budgets for AI training.
Almost all firms (99%) have "some approach" to developing AI skills, but most rely on informal methods: mentorship (54%) and self-directed online learning (52%). Structured internal programs (16%) and external partnerships (21%) are limited. Where training exists, nearly half say it reaches less than 10% of employees, confining AI capability to small specialist teams.
London is an outlier. Over half (53%) cite university partnerships as a top strategy, and 41% say they're scaling AI initiatives across multiple business units.
Soft skills and weak ownership stall adoption
Executives rate critical thinking and creativity as highly as technical skills (both 95%), yet only around a third say employees excel in them. Without these, teams struggle to question AI outputs, adapt, and innovate.
Ownership is also diffused. Nearly half say managers have minimal responsibility for AI skills, and 8% say there's none. Resistance from employees and middle management frequently blocks execution.
What executives should do next
- Move beyond productivity as the sole ROI measure: Set three measurable outcomes per use case across revenue, risk, and quality (e.g., cycle-time reduction, error-rate drop, customer NPS lift, compliance incidents avoided).
- Fund learning like you fund software: Ringfence a meaningful slice of each AI project for training. Target broad coverage-especially for impacted roles-not just specialists. Blend formal programs with on-the-job application.
- Make governance enforceable: Assign a single accountable owner. Stand up model registries, data handling standards, audit trails, incident response, and red-teaming. Align to the NIST AI Risk Management Framework to accelerate.
- Close priority skill gaps first: Cybersecurity, privacy, and bias detection are non-negotiable. Use scenario-based drills and real data-not just slideware.
- Hold managers responsible: Tie AI capability goals and adoption metrics to performance and incentives. No accountability, no change.
- Invest in soft skills: Train teams to question outputs, validate sources, and make decisions with AI as a copilot. Codify human-in-the-loop checkpoints.
- Scale by design: Move from one-off pilots to reusable playbooks. Stand up cross-functional squads, shared components, and a go/no-go checklist before scaling across units.
Quarterly metrics that keep you honest
- % of AI projects that pass governance gates before launch
- % of impacted roles trained vs. total impacted headcount
- Adoption and sustained usage by frontline teams
- Outcome deltas: revenue lift, cost per transaction, error rate, time-to-resolution
- Number and severity of AI-related incidents
- Time from pilot to scaled deployment; reuse rate of components/playbooks
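The scorecard above can be kept honest with a simple computation over project-level records. A minimal sketch; all field names, headcounts, and figures here are hypothetical, chosen only to illustrate the mechanics:

```python
# Hypothetical quarterly AI scorecard computed from project records.
projects = [
    {"passed_governance_gate": True,  "pilot_to_scale_days": 120,  "reused_playbook": True},
    {"passed_governance_gate": True,  "pilot_to_scale_days": 90,   "reused_playbook": False},
    {"passed_governance_gate": False, "pilot_to_scale_days": None, "reused_playbook": False},
]
impacted_headcount = 400  # assumed: employees whose roles AI materially changes
trained_headcount = 48    # assumed: of those, how many completed formal training

def pct(part: float, whole: float) -> float:
    """Share of `whole` represented by `part`, as a percentage to one decimal."""
    return round(100 * part / whole, 1)

# % of AI projects that pass governance gates before launch
governance_pass_rate = pct(sum(p["passed_governance_gate"] for p in projects), len(projects))

# % of impacted roles trained vs. total impacted headcount
training_coverage = pct(trained_headcount, impacted_headcount)

# Average time from pilot to scaled deployment, for projects that scaled
scaled = [p["pilot_to_scale_days"] for p in projects if p["pilot_to_scale_days"] is not None]
avg_pilot_to_scale = sum(scaled) / len(scaled)

# Reuse rate of components/playbooks across projects
reuse_rate = pct(sum(p["reused_playbook"] for p in projects), len(projects))
```

Tracking even this small a schema per project makes the quarterly review a query rather than a debate.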
Bottom line
Short-term productivity plays feel good, then stall. Sustainable advantage comes from building capability (skills, governance, and clear ownership) so AI scales beyond experiments and survives leadership changes and market cycles.
As one industry leader noted, prioritising quick wins over skills leaves value on the table. Close the capability gaps, and AI stops being a presentation slide and starts compounding across the business.
Go deeper
- Economist Impact publishes research on AI adoption, governance, and workforce readiness.
- AI for Executives & Strategy covers strategy, operating models, and governance patterns to translate AI ambition into business value.
- AI for Human Resources explores practical L&D and talent strategies to build AI skills at scale.