AI Coding Tools Slow Down Experienced Developers Despite Widespread Hype and Industry Optimism

A study finds experienced developers take 19% longer to complete tasks when using AI coding assistants like Cursor Pro. Even so, 69% kept using the tools, citing reduced cognitive effort.

Published on: Jul 12, 2025

AI Coding Assistants Slow Down Experienced Developers, New Study Finds

Contrary to popular belief, experienced developers take about 19% longer to complete tasks when using leading AI coding assistants like Cursor Pro and Claude. This finding comes from a rigorous study by Model Evaluation & Threat Research (METR), which tracked 16 seasoned open-source developers working on large, mature repositories averaging over one million lines of code.

The study used a randomized controlled trial (RCT) to compare productivity with and without AI tools. Results showed that AI tools, rather than speeding up work, actually made developers slower on average.
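To make that comparison concrete, here is a minimal sketch of how a time-based RCT estimate can be computed, assuming per-task completion times for the two randomized conditions; the numbers and the log-time estimator below are illustrative, not METR's published analysis.

```python
import numpy as np

# Hypothetical completion times in minutes. In the study, each GitHub issue
# was randomly assigned to allow or forbid AI assistance before work began.
ai_allowed   = np.array([131, 95, 150, 118, 142])   # illustrative values only
ai_forbidden = np.array([110, 84, 121, 102, 116])   # illustrative values only

# Comparing mean log-times yields a multiplicative effect estimate, which is
# robust to the wide spread typical of task durations.
log_effect = np.mean(np.log(ai_allowed)) - np.mean(np.log(ai_forbidden))
slowdown_pct = (np.exp(log_effect) - 1) * 100

print(f"Estimated change in task duration with AI: {slowdown_pct:+.1f}%")
```

A positive result here means AI-assisted tasks took longer; METR's headline figure corresponds to roughly +19% on this scale.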

The Reality vs. Perception Gap

Before the study, developers expected AI to cut their task time by 24%. Even after experiencing slower task completion, they still believed AI had improved their productivity by 20%. This misperception extended beyond developers: economists anticipated a 39% boost, and machine learning experts forecast 38%. All predictions significantly overestimated the actual impact. On a typical two-hour task, the measured 19% slowdown works out to roughly 23 extra minutes, where developers had predicted saving about 29.

Sanchit Vir Gogia, CEO of Greyhound Research, highlighted the risk organizations face by confusing developer satisfaction with productivity. While AI tools often reduce cognitive load and improve the coding experience, they don't necessarily speed up output—especially for developers with deep expertise.

Real-World Testing with Experienced Developers

The study’s methodology stands out for its realism. It involved 16 developers contributing to popular open-source projects averaging more than 22,000 GitHub stars and over a million lines of code. Tasks were randomly assigned to either allow or disallow AI tool use, primarily Cursor Pro integrated with Claude 3.5 and 3.7 Sonnet. Tasks averaged about two hours each, with screen recordings capturing actual AI usage patterns.

Gogia called this research a critical correction to the assumption that AI coding tools automatically enhance productivity. He urged companies to adopt rigorous, structured evaluation models rather than relying solely on vendor claims or simplistic benchmarks.

Why Does AI Slow Experienced Developers?

Several factors contribute to this productivity paradox:

  • Developers often experimented with AI beyond productive use, despite instructions to use it selectively.
  • Participants averaged five years’ experience and 1,500 commits in their repositories, with greater slowdowns on tasks where they had strong prior knowledge.
  • Developers accepted fewer than 44% of AI-generated suggestions, with most reading every line of output and over half extensively modifying AI code before integrating it.
  • Large, complex codebases with strict standards and intricate dependencies proved difficult for AI to navigate effectively.

According to Gogia, the 19% slowdown reflects the friction of blending probabilistic AI suggestions into deterministic, expert workflows. True productivity measurement should consider rework, code churn, and peer reviews—not just coding speed.
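As one way to see what such a broader measure could look like, the sketch below computes a churn-aware throughput figure from hypothetical per-task telemetry; the TaskRecord fields, the 30-day churn window, and the review-round discount are all assumptions for illustration, not metrics defined by METR or Greyhound Research.

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    """Hypothetical per-task telemetry; all field names are assumptions."""
    minutes_spent: float
    lines_added: int
    lines_reworked_within_30d: int  # churn: lines rewritten soon after merge
    review_rounds: int              # peer-review back-and-forth before merge

def effective_throughput(task: TaskRecord) -> float:
    """Surviving lines per hour, discounted by review overhead."""
    surviving = max(task.lines_added - task.lines_reworked_within_30d, 0)
    hours = task.minutes_spent / 60
    return surviving / hours / max(task.review_rounds, 1)

# Fast raw output with heavy churn can net out worse than slower, cleaner work.
fast_but_churny = TaskRecord(minutes_spent=90, lines_added=400,
                             lines_reworked_within_30d=220, review_rounds=3)
slow_but_clean = TaskRecord(minutes_spent=120, lines_added=250,
                            lines_reworked_within_30d=20, review_rounds=1)

print(effective_throughput(fast_but_churny))  # 40.0 effective lines/hour
print(effective_throughput(slow_but_clean))   # 115.0 effective lines/hour
```

On these numbers, the faster-looking task delivers roughly a third of the effective output, which is exactly the kind of gap that raw coding speed hides.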

Industry Data Echoes These Findings

The METR study aligns with trends from Google’s 2024 DevOps Research and Assessment (DORA) report, which surveyed over 39,000 professionals. While 75% of developers reported feeling more productive with AI, actual data showed a 1.5% decrease in delivery speed and a 7.2% decline in system stability for every 25% increase in AI adoption. Trust issues remain high, with 39% expressing little or no confidence in AI-generated code.
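Read naively as linear rates, which is an assumption of this sketch (DORA reports correlations, not causal per-unit effects), those figures extrapolate as follows:

```python
def projected_change(adoption_increase_pct: float,
                     change_per_25pct: float) -> float:
    """Linear extrapolation of the DORA 2024 correlations (an assumption)."""
    return adoption_increase_pct / 25.0 * change_per_25pct

# A team raising AI adoption by 50 percentage points:
print(projected_change(50, -1.5))   # delivery speed: -3.0%
print(projected_change(50, -7.2))   # stability:      -14.4%
```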

Earlier optimistic studies, such as those from MIT, Princeton, and the University of Pennsylvania, reported significant productivity gains using tools like GitHub Copilot—but these focused on simpler, isolated tasks rather than complex, real-world projects.

Despite billions invested in AI coding tools and reports that 41% of new code on GitHub is AI-generated, a fundamental trust gap persists. One developer compared relying on AI code to the early days of Stack Overflow: often copying code that later caused problems.

A Practical Approach Moving Forward

Despite slower task completion, 69% of developers chose to keep using Cursor after the study, indicating they value benefits beyond speed, such as reduced cognitive effort or improved workflow experience.

The METR research suggests AI coding tools aren’t doomed but need a balanced approach. Gogia recommends treating AI as a contextual co-pilot—effective for augmenting cognition in areas like documentation, boilerplate code, and test generation—but holding back in areas where deep expertise and repository familiarity matter most.
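A lightweight way to operationalize that advice is an explicit usage policy. The sketch below encodes one such policy in code; the categories and dispositions are illustrative assumptions, not recommendations published by Greyhound Research.

```python
# Hypothetical team policy mapping task categories to an AI-usage stance.
AI_USAGE_POLICY = {
    "documentation":       "allowed",
    "boilerplate":         "allowed",
    "test_generation":     "allowed",
    "core_business_logic": "review_required",  # deep expertise dominates here
    "mature_subsystems":   "discouraged",      # repo familiarity beats AI
}

def ai_stance(task_category: str) -> str:
    # Default to requiring review for anything uncategorized.
    return AI_USAGE_POLICY.get(task_category, "review_required")

print(ai_stance("boilerplate"))        # allowed
print(ai_stance("mature_subsystems"))  # discouraged
```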

Enterprises should implement governance and measurement frameworks that go beyond vendor claims, focusing on when and where AI tools add real value.

For those interested in deepening their skills with AI tools in coding and development, exploring specialized AI courses can provide practical insights and strategies aligned with current industry realities.