AI Optimism Meets an Expertise Vacuum: Why Critical Thinking Is the New Scarce Skill
A new global survey of 1,540 board members and C-suite executives shows a surprising split. Leaders are bullish on AI, yet they're staring at a deeper problem: the slow disappearance of the career paths that built strategic thinkers. AI isn't just exposing a technical gap; it's exposing a critical thinking gap that puts future oversight and performance at risk.
Talent ranks fifth among long-term risks heading into 2026 and will remain a front-burner issue through 2035. What's different now is scope. This isn't confined to IT. As Fran Maxwell of Protiviti put it, the skills shortfall is "more prevalent now than it has ever been," touching almost every role.
The new gap: thinking, not tasks
Mark Beasley of NC State's Poole College of Management drew a sharp line: past waves of tech were enhancers; AI replaces jobs outright. That shifts the bar from execution to cognition. "Knowledge is sort of now free in some ways. Thinking now has to really kick in," he said.
This is where the risk compounds. Entry-level "grunt work" has long been the training ground for judgment, and those reps are the path that created mid-level experts and, eventually, senior leaders. AI is wiping out exactly those reps.
The expertise pipeline is breaking
Julia Coronado, board director at Robert Half, framed the dilemma clearly: if AI takes the entry-level bench and you still need a strong middle, how do you grow it? Quality control, model oversight, and decision rights still require people with depth. But where will they come from?
Maxwell was blunt: organizations must redesign jobs and re-architect how they grow talent. HR will need new muscles, fast: skills mapping, role design, and structured development.
Interconnected risks raise the stakes
Cyber is the top near-term risk, closely followed by third-party exposure. Sameer Ansari of Protiviti linked these directly to AI: bias, model drift, weak access controls, and a shortage of people who understand how to operate these systems. When your vendors plug AI into their workflows, your risk compounds, often invisibly.
If you want a baseline for governance, the NIST AI Risk Management Framework (AI RMF) is a practical starting point for controls and assurance.
What leaders should do now
Here's a pragmatic playbook to close the thinking gap while you scale AI.
1) Redesign work to create thinking reps
- Split roles into "AI does it" vs. "humans decide it." Push execution to AI; keep judgment with people.
- Convert entry-level tasks into decision-training modules: case reviews, scenario analysis, and model QA.
- Build rotations across functions and tools so juniors see context, not just outputs.
2) Stand up an apprenticeship for critical thinking
- Pair early-career talent with seniors to review AI outputs, probe assumptions, and write decision memos.
- Use structured prompts and checklists to force reasoning: hypotheses, evidence, risk, alternative paths.
- Reward clear thinking in performance reviews: clarity of problem definition, decision quality, and learning speed.
3) Upskill the workforce you already have
- Run a skills inventory: what you have, what you need, and where to build vs. buy.
- Create role-based paths for AI fluency: operator, reviewer, product owner, control/assurance, and change leader.
- Leverage reverse mentoring: younger staff coach seniors on AI tools; seniors coach on judgment and context.
4) Build AI governance that actually works
- Keep an inventory of AI use cases, models, data sources, and owners. Assign decision rights.
- Implement model QA: bias checks, drift monitoring, human-in-the-loop criteria, and outcome audits (a minimal drift check is sketched at the end of this section).
- Lock down third-party risk: attestations, monitoring, kill switches, and clear failure playbooks.
For threat-informed defenses, map controls to known attack techniques with MITRE ATLAS.
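To make the inventory and drift-monitoring bullets concrete, here is a minimal sketch in Python. Everything in it is illustrative rather than prescribed by the survey or the frameworks above: the AIUseCase record fields, the PSI-based drift check, and the 0.2 alert threshold are common patterns you would adapt to your own QA policy.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class AIUseCase:
    """One row in an AI inventory: what runs, on what data, and who decides."""
    name: str
    model: str
    data_sources: list[str]
    owner: str            # an accountable human, not a team alias
    decision_rights: str  # e.g., "human approves all customer-facing outputs"

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution and a recent one.
    Rule of thumb: < 0.1 stable, 0.1-0.2 watch, > 0.2 investigate."""
    cuts = np.percentile(expected, np.linspace(0, 100, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf  # cover out-of-range production scores
    e_pct = np.histogram(expected, cuts)[0] / len(expected)
    a_pct = np.histogram(actual, cuts)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Illustrative usage: flag a use case whose production scores have drifted.
case = AIUseCase(
    name="invoice triage",
    model="classifier-v3",
    data_sources=["erp_invoices"],
    owner="AP controller",
    decision_rights="human reviews every rejection",
)
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)  # scores at validation time
recent = rng.normal(0.4, 1.0, 5_000)    # scores in production this week
psi = population_stability_index(baseline, recent)
if psi > 0.2:                           # assumed alert threshold
    print(f"Drift alert for {case.name}: PSI={psi:.2f}; notify {case.owner}")
```

The design point is the pairing: every monitored signal routes to a named owner with defined decision rights, so a drift alert is somebody's job, not just a dashboard light.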
5) Create new ladders, not just new titles
- Define progressions that build judgment: junior reviewer → model steward → portfolio owner.
- Shift incentives from task volume to decision quality, customer outcomes, and model performance.
- Codify tacit knowledge. Turn "how we decide" into playbooks, not folklore.
6) Measure what matters
- Bench depth: percentage of roles with ready-now and ready-soon successors.
- Skills coverage: critical skills filled vs. forecast need per function.
- AI quality: error rates caught by humans, model drift incidents, time-to-detect and time-to-correct (sketched after this list).
- Learning velocity: time to proficiency by role; uplift in decision quality after training.
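As a hedged illustration of how two of these measures could be computed, the Python sketch below assumes you already log incident timestamps and successor readiness somewhere queryable; the field names and sample values are hypothetical.

```python
from datetime import datetime
from statistics import median

# Hypothetical incident log: when a model issue started, was detected, was corrected.
incidents = [
    {"started": datetime(2025, 3, 1, 9), "detected": datetime(2025, 3, 1, 15),
     "corrected": datetime(2025, 3, 2, 11)},
    {"started": datetime(2025, 4, 10, 8), "detected": datetime(2025, 4, 12, 8),
     "corrected": datetime(2025, 4, 12, 20)},
]

def hours(a, b):
    return (b - a).total_seconds() / 3600

ttd = median(hours(i["started"], i["detected"]) for i in incidents)    # time-to-detect
ttc = median(hours(i["detected"], i["corrected"]) for i in incidents)  # time-to-correct

# Hypothetical succession map: critical role -> ready-now/ready-soon successor exists?
succession = {"model steward": True, "portfolio owner": False, "reviewer lead": True}
bench_depth = 100 * sum(succession.values()) / len(succession)

print(f"Median time-to-detect: {ttd:.1f} h; time-to-correct: {ttc:.1f} h")
print(f"Bench depth: {bench_depth:.0f}% of critical roles have a successor")
```

Medians rather than means keep one long-tail incident from masking a broadly fast response; track the tail separately if it matters to you.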
Your 90-day plan
- Weeks 1-2: Build a cross-functional squad (HR, risk, tech, business) and agree on five high-impact use cases.
- Weeks 3-4: Redesign the related jobs. Document human decisions vs. AI tasks. Add QA and escalation paths.
- Weeks 5-8: Launch targeted training for those teams with real data and real decisions.
- Weeks 9-12: Turn lessons into a repeatable playbook and scale to two more functions.
The mindset shift
Economic uncertainty is now a constant. The bigger threat is standing still. As Beasley noted, stagnation is the real risk. Companies that act with clarity on values, customer focus, and talent strategy are the ones that keep momentum.
Maxwell said it plainly: "You can't solve today's talent problems with yesterday's talent." Redesign work. Build thinkers. Upskill at speed. The organizations that do this won't just use AI; they'll lead with it, responsibly and profitably.