AI Is Speeding Up Work and Quietly Eroding Skills. HR Needs a Plan.
Employers are pushing AI into more workflows to lift output. Anastasia Berg, a philosophy professor at UC Irvine, warns the trade-off is real: overreliance on AI is stripping workers of foundational capabilities, hurting productivity both now and down the line.
Berg cites research and industry anecdotes that show a clear pattern: when people offload routine thinking to software, core skills atrophy. Skills are kept by practice, not delegation. Without friction, the muscle of judgment fades.
The argument in plain terms
AI helps with tasks, but it also short-circuits the learning that builds competence. The early-career cohort is most exposed. If juniors skip the grind of debugging, drafting, and deciding, they don't build the gut-check needed to know when an AI answer is wrong.
This dependency is spilling into life outside work. Many adults now use chatbots for emotional support and small decisions, a pattern Berg calls "constant advice" and "emotional task management." Offload enough choices, and independent reasoning gets dull.
Why this is an HR problem
Skill erosion looks like speed at first, then shows up as rework, brittle teams, and stalled promotions. You may see fast outputs with weak judgment, a growing gap between seniors and juniors, and more reliance on a handful of "fixers."
The risk compounds: if the tool changes or fails, performance craters. Bench strength shrinks. Hiring costs rise because you're forced to buy skills instead of growing them.
Early-career warning signs
Berg notes computer science professors and engineering managers are seeing junior developers rely on AI to write and fix code without grasping the underlying concepts. "It's one thing for a senior coder to use AI," she said. "But the junior people are useless because they cannot help themselves from using it."
Translate that beyond engineering: new hires draft emails, briefs, reports, and analyses with AI from day one. They can produce, but can't judge. That's a problem you can't see until it's expensive.
Practical moves for HR to protect capability while using AI
- Define AI-eligible tasks by seniority. Let seniors use AI for speed on known patterns. Require juniors to work problems manually first, then compare with AI.
- Write "manual-first" standards for learning roles. For interns and associates, set quotas for non-AI reps per week (e.g., 3 reports, 2 code reviews, 2 client emails) with human feedback.
- Run no-AI drills. Monthly sprints where teams complete core tasks without AI. Compare accuracy, time, and reasoning quality against normal weeks.
- Require evidence of reasoning. For key deliverables, ask for a short "why it's correct" note and sources. If AI was used, require verification steps and what changed after review.
- Pair work and mentorship. Juniors draft, seniors review. Seniors explain trade-offs; juniors rewrite based on feedback. Keep the loop tight.
- Progressive permissions. Gate advanced AI features behind proficiency checks. Earn access by passing no-AI assessments.
- Log tool use ethically. Track where AI is used, for what tasks, and the rework rate. Use aggregate data for coaching, not punishment.
- Codify guardrails. Define what must never be delegated to AI: policy, legal, compensation changes, performance ratings, and sensitive employee communications.
- Build capability matrices. List the human skills required per role (problem framing, estimation, debugging, stakeholder comms). Assess quarterly without AI.
- Friction-friendly learning design. Keep some tasks hard by design: timed case work, red-team reviews, and postmortems that focus on decision quality, not just output speed.
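The logging move above (track where AI is used and the rework rate, in aggregate) can be operationalized with a simple roll-up. A minimal sketch, assuming a hypothetical log of task records with no per-person identifiers:

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical record shape for an ethical, aggregate AI-use log.
@dataclass
class TaskRecord:
    task_type: str       # e.g. "report", "code_review", "client_email"
    used_ai: bool        # was AI used on this task?
    rework_hours: float  # hours spent fixing the deliverable after review

def rework_rate_by_task(records):
    """Average rework hours per task, split AI-assisted vs. manual."""
    totals = defaultdict(lambda: {"ai": [0.0, 0], "manual": [0.0, 0]})
    for r in records:
        bucket = "ai" if r.used_ai else "manual"
        totals[r.task_type][bucket][0] += r.rework_hours
        totals[r.task_type][bucket][1] += 1
    return {
        task: {b: (hours / n if n else 0.0) for b, (hours, n) in buckets.items()}
        for task, buckets in totals.items()
    }
```

A widening gap between the "ai" and "manual" averages for the same task type is the early signal worth coaching on; because the log carries task types rather than names, it supports coaching conversations without becoming surveillance.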
Metrics that catch skill atrophy early
- Time to independent performance for new hires (tracked by task type)
- No-AI assessment scores by team and level
- Defect density and rework hours on AI-assisted work
- Escalation rate to seniors for routine issues
- Quality scores from cross-functional stakeholders (clarity, accuracy, decision rationale)
- Promotion-readiness vs. tenure gaps
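As an illustration, the no-AI assessment metric above reduces to a per-level average plus a senior-junior gap. A hypothetical sketch, assuming scores are collected on a common scale:

```python
from statistics import mean

def no_ai_score_gap(scores_by_level):
    """Mean no-AI assessment score per level, plus the senior-junior gap.

    `scores_by_level` is a hypothetical mapping such as
    {"junior": [72, 68], "senior": [88, 91]}.
    A gap that widens quarter over quarter suggests juniors are
    producing with AI without building judgment.
    """
    means = {level: mean(vals) for level, vals in scores_by_level.items()}
    gap = means.get("senior", 0.0) - means.get("junior", 0.0)
    return means, gap
```

Tracked quarterly by team, the trend matters more than any single number: a stable gap is normal seniority; a growing one is atrophy.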
Hiring and promotion signals
- Interview for first-principles thinking. Ask candidates to solve a problem live, without tools. Score clarity of assumptions, not just the answer.
- Scenario tests. "The AI output looks plausible but contradicts your data. What do you do?" Look for verification steps and decision criteria.
- Portfolio with annotations. Show what was AI-generated, what was changed, and why. Reward discernment, not volume.
Policy starter checklist
- Permitted uses and banned uses by role
- Verification steps for any AI-assisted deliverable
- Disclosure requirement: mark where AI contributed
- Data safety rules: no sensitive inputs, approved tools only
- Training cadence: no-AI drills, feedback sessions, and refreshers
- Audit rhythm: monthly sampling, quarterly capability reviews
Training without dependency
Blend AI literacy with deliberate practice that keeps skills alive. Teach workers how AI works, where it fails, and how to review it. Then force reps without it.
Bottom line
AI can speed output, but it can also hollow out the very skills that make your people valuable. Berg's warning is simple: skills decay when we stop using them.
Your job is to keep the muscle. Set guardrails, keep friction where it teaches, measure what matters, and grow judgment on purpose. Speed is useful. Competence is non-negotiable.