AI coding assistants can slow skill growth: what the data says and how to respond
New research sponsored by Anthropic found that developers using AI assistance scored 17 percentage points lower on skill tests than those coding manually. The gap showed up most in debugging, the skill where engineers earn their stripes, and it raises clear questions for teams leaning on AI to ship faster.
The study tracked 52 engineers learning the Trio asynchronous Python library. Productivity gains didn't offset the cost: AI users finished only two minutes faster on average, a difference that wasn't statistically significant.
What the study measured
- Randomized controlled trial: one group with AI assistance, one coding by hand.
- Quiz scores after learning: AI users averaged 50% vs. 67% for manual coders.
- Time: negligible speed gain; some participants spent up to 11 minutes crafting prompts, erasing any benefit.
How you use AI matters
Six distinct usage patterns emerged. Three hurt learning (often badly). Three preserved it.
- Detrimental patterns (scores below 40%): complete AI delegation; progressive reliance (start human-led, then switch to AI-led); iterative AI debugging.
- Preserving patterns (65-86%): ask conceptual questions only; request explanations alongside generated code; generate code but follow up with questions to build understanding.
Bottom line: wholesale delegation is fast but hollow. Engagement beats outsourcing when the goal is skill growth.
Why debugging takes the hit
The manual group saw more errors during tasks and fixed them independently. That struggle built the muscle they needed to diagnose issues later. AI users saw fewer errors in the moment but paid for it on the debugging questions: less practice, weaker intuition.
That's a problem if you're validating AI-generated code. You still need the judgment to spot subtle failures and reason about side effects.
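To make that concrete, here is a minimal sketch (a hypothetical snippet, not taken from the study) of the kind of subtle failure that validation requires you to catch: the program runs and prints the expected output, but a blocking call quietly serializes work that looks concurrent.

```python
# Hypothetical AI-style draft: it runs and prints "done" three times, but
# time.sleep() never yields to Trio's scheduler, so the three "concurrent"
# workers actually run one after another.
import time

import trio

async def fetch_slow(label: str) -> None:
    time.sleep(1)          # BUG: blocks the whole event loop for a second
    # await trio.sleep(1)  # fix: a cooperative sleep that lets other tasks run
    print(f"{label} done")

async def main() -> None:
    async with trio.open_nursery() as nursery:
        for label in ("a", "b", "c"):
            nursery.start_soon(fetch_slow, label)

trio.run(main)
```

An engineer who has spent time debugging event loops spots the blocking sleep on sight; one who has only rubber-stamped generated code often does not.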
A practical playbook for engineers
- Use AI for ideas, not answers. Ask conceptual questions or request an explanation with code. Force your brain to engage.
- Write first, then compare. Draft your approach, then ask AI for an alternative. Diff the two and annotate what you learned.
- Timebox prompting. Cap AI querying to a few minutes. If you're still stuck, switch to docs or a human.
- Interrogate output. Ask: what are the failure modes? What are the invariants? Why this algorithm vs. another?
- Practice debugging on purpose. Reproduce failures, isolate minimal repros, and explain root causes in plain language (a minimal-repro sketch follows this list).
- Keep a learning log. Each assist should end with a note: what changed in your mental model?
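Here is what that minimal-repro habit can look like in practice, in a hypothetical sketch using the same Trio library from the study: shrink a flaky timeout bug to the smallest program that still reproduces it, then state the root cause in one sentence.

```python
# Hypothetical minimal repro: the smallest program that still shows the bug.
import trio

async def flaky_request() -> str:
    # Stand-in for the real call; it needs more time than the caller allows.
    await trio.sleep(2)
    return "ok"

async def main() -> None:
    # Hypothesis: the caller's timeout fires before the request completes.
    with trio.move_on_after(1) as scope:
        result = await flaky_request()
        print("got:", result)
    if scope.cancelled_caught:
        print("reproduced: the request exceeded the 1-second deadline")

trio.run(main)
```

Root cause in plain language: the request needs two seconds, but the caller only waits one. Writing that sentence is the rep that builds debugging intuition.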
Team guidelines that balance speed and learning
- Define "assist modes." Concept-only, explain-with-code, or code-first-with-follow-ups. Ban complete delegation for juniors.
- Require explanations in PRs. If AI contributed, include a brief rationale and risks. Review diffs like an API contract, not a wall of code.
- Make debugging a weekly rep. Postmortems, failure hunts, and "read the stack trace aloud" drills.
- Track real outcomes. Time saved, defect rates, rework, onboarding speed. Don't rely on vibe or isolated anecdotes.
- Pair deliberately. Senior-led sessions that narrate thought processes. Juniors verbalize hypotheses before using AI.
Limits to keep in mind
The study measured immediate comprehension after roughly an hour of learning. It didn't track long-term effects or on-the-job performance. A stronger test would follow cohorts over months to see how habits compound.
Still, the signal is clear enough for action: careless reliance trades short-term convenience for long-term capability.
Risk management and policy
- Guardrails, not bans. Allow AI, but set boundaries by task type and seniority.
- Validation-first culture. Treat AI output as a draft. Prove it works; don't assume it does (see the test sketch after this list).
- Skill preservation as a KPI. Debugging and code comprehension are strategic assets. Manage them like uptime.
- Assume mixed incentives. Vendors talk safety, but usage can be opaque (especially via API). Your policies must stand on their own.
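One way to operationalize "prove it works" is to pin the draft's behavior with a test before it merges. The sketch below is hypothetical (finish_within stands in for an AI-drafted helper) and uses Trio's autojump clock so the timeout check runs instantly.

```python
# Hypothetical validation-first test for an AI-drafted helper.
import trio
import trio.testing

async def finish_within(seconds: float) -> bool:
    # Helper under review: returns True only if the work finishes in time.
    with trio.move_on_after(seconds) as scope:
        await trio.sleep(10)  # stand-in for slow work
    return not scope.cancelled_caught

def test_timeout_actually_fires() -> None:
    async def check() -> None:
        assert await finish_within(1) is False  # slow work must be cut off

    # autojump_threshold=0 skips real waiting, so the test runs in milliseconds.
    trio.run(check, clock=trio.testing.MockClock(autojump_threshold=0))
```

The test, not the assistant's confidence, becomes the contract reviewers read.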
Useful resources
- Trio documentation: the library used in the study; a great reference for async patterns done right.
- AI Certification for Coding: structured approaches for using assistants without losing core engineering skills.
AI can help you ship. It can't think for you. Keep the hard parts of engineering in your hands, and use the tools to sharpen, not replace, your judgment.