Code Review, Not Code Writing, Now Dominates Engineering Work
Eighty-one percent of engineering leaders say the time saved by AI coding tools is now spent auditing AI-generated code, according to The State of Engineering Excellence 2026 report released Wednesday. The shift reveals a fundamental change in how developers spend their days: less writing, more reviewing.
The report surveyed 700 developers and engineering professionals at enterprise companies. It found that nearly a third of a developer's day goes to reviewing AI output, work that doesn't show up in traditional productivity metrics.
The Measurement Problem
Organizations adopted AI coding tools expecting faster delivery. They got it. Cycle times shortened, and developers report feeling more productive. But the gains came with hidden costs.
The overhead of reviewing AI work is real, yet invisible. Traditional dashboards measure code output and velocity, not the time spent validating what AI produced. Most enterprises lack tools to track whether productivity actually improved or simply shifted.
Many tech leaders built their measurement frameworks a decade ago, before AI entered the picture. Those frameworks don't capture the new unit of work: code review and quality assurance on AI-generated output.
Pressure Mounts on Developers
Engineering responsibilities have expanded. Developers now scrutinize code quality and security, make judgment calls about when to trust AI, and take accountability for downstream outcomes. Meanwhile, more than two-thirds of developers report increased pressure to deliver faster.
Enterprises are raising expectations without always updating how they measure success. That gap creates friction.
What Leaders Should Do
Tech leaders can start by auditing what their current measurement frameworks capture versus what AI adoption creates. Three concrete steps help:
- Track code delivery rate and quality metrics specific to AI-assisted workflows
- Measure time spent reviewing AI output as a distinct category of work (see the sketch after this list)
- Build governance and security review processes that match the new reality
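To make the second step concrete, here is a minimal sketch of how a team might log review of AI-generated output as its own work category and compute its share of a developer's time. The `WorkEntry` structure and the `ai_review` label are illustrative assumptions, not something defined in the report.

```python
from dataclasses import dataclass

# Hypothetical work-log entry; field names are illustrative.
@dataclass
class WorkEntry:
    developer: str
    category: str   # e.g. "feature", "ai_review", "security_review"
    hours: float

def ai_review_share(entries: list[WorkEntry]) -> float:
    """Fraction of logged hours spent reviewing AI-generated output."""
    total = sum(e.hours for e in entries)
    if total == 0:
        return 0.0
    ai_review = sum(e.hours for e in entries if e.category == "ai_review")
    return ai_review / total

# Example: one developer's logged week.
week = [
    WorkEntry("dev-a", "feature", 18.0),
    WorkEntry("dev-a", "ai_review", 12.0),
    WorkEntry("dev-a", "security_review", 6.0),
]
print(f"AI-review share: {ai_review_share(week):.0%}")  # ~33%, roughly the "nearly a third" the report describes
```

However the data is collected, the point is the same: review of AI output becomes a first-class metric rather than time that disappears into overall velocity numbers.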
Developers want clarity on how they'll be evaluated. More than half of respondents said they fear performance assessments based on AI metrics and want improvement data kept clearly separate from personal evaluations.
Working with developers to define the guardrails and systems by which they'll be measured reduces friction and builds trust in the new workflow.
AI Changes the Job Itself
Previous technology advances, such as cloud infrastructure and internet platforms, operated beneath developer roles. AI is different: it directly changes what developers do and how their work should be measured.
For developers adapting to these shifts, AI for Software Developers covers the practical skills needed to manage AI-driven workflows and code review. AI Coding Courses also address how to work effectively with AI-generated code.