AI can help or hurt software development - old best practices still decide the outcome
Generative AI can write a lot of code, but its impact on productivity is uneven. Data from Atlassian's DX, Google's DORA, and LaunchDarkly shows massive variation between teams. The teams that win aren't doing anything new. They're doubling down on fundamentals they should have been doing all along.
What the data actually shows
DX presented anonymized metrics from 135,000+ developers across 400 customers. A DORA report earlier this year found a 2.6% average improvement in code quality from GenAI usage, with just a 0.11% shift in change failure rate (CFR). Not impressive on average - but the averages hide enormous variation.
Across companies, change confidence varied widely: some teams saw 20%+ improvements, others took a hit. Maintainability showed the same pattern - a small average gain (~2%), with big swings up and down depending on the team's practices. DORA's research is worth a look if you track engineering performance.
LaunchDarkly reported that 94% of respondents said AI accelerates coding, but 91% don't trust AI-written code in production. 81% admitted shipping with known risks due to delivery pressure - which then shows up as incidents. Source: LaunchDarkly research.
Where AI helps right now
- Top use case: stack trace analysis (DX).
- Code completion: saves ~3.8 hours per week per developer. Bigger than it sounds, because writing code is only a fraction of a developer's week.
- Pull requests: AI users ship ~60% more PRs. Useful if your review gates catch junk; harmful if they don't.
- Seniority effect: juniors use AI more (41.3% daily) than staff engineers (32.7%), but seniors save more time thanks to better judgment.
- Language effect: Go developers save ~4 hours/week; COBOL developers closer to 2. "Modern" ecosystems get more lift.
Pain points AI doesn't fix
- Meeting-heavy days and constant interruptions.
- Slow builds and flaky pipelines.
- Long PR review queues and unclear ownership.
- Context switching that kills flow.
These bottlenecks dwarf the time you save with code generation. Fix them first, or AI just helps you produce blocked work faster.
Why some teams improve and others regress
Teams that invest in the SDLC - clean code practices, testing, review hygiene, fast feedback, and clear standards - see gains. Teams without those guardrails see quality dip and rework spike. There's also a J curve: productivity often dips during early adoption before it climbs.
AI is also pulling more non-developers into shipping code (prototypes, scripts). That's good for empathy and momentum, but it demands stronger guardrails, review policies, and environments that protect production.
The biggest predictor of sustained productivity still hasn't changed: psychological safety. Teams that can question decisions, flag risks, and learn in public move faster - especially with AI in the mix.
Practical actions for engineering leaders
- Codify code hygiene: enforce PR templates, linters, required tests, and clear acceptance criteria.
- Speed up feedback: invest in build caching, parallel CI, and trunk-based workflows with feature flags (see the flag-gating sketch after this list).
- Define AI usage: where it's allowed, where it's not, and what review steps are mandatory.
- Measure outcomes, not vibes: track change confidence, review time, CFR, and maintainability trends (a CFR/review-time sketch follows below).
- Level up code reading: teach developers to validate AI output; pair seniors with juniors on AI-assisted work.
- Guard PR quality: auto-check security, licensing, tests, and architectural boundaries before human review (a gate-script sketch follows below).
- Protect flow: schedule interruption-free blocks; limit ad-hoc work; reduce PR handoffs.
- Plan for the J curve: start small, establish quality gates, expand usage as metrics improve.
- Welcome contributions outside core devs: sandbox environments, clear deploy lanes, and non-prod defaults.
- Build psychological safety: blameless postmortems, transparent standards, and permission to slow down for quality.
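To make the trunk-based-plus-flags point concrete, here's a minimal sketch of flag-gated code paths. Everything in it is hypothetical - the flag name, the billing functions, and the in-process rollout table, which in production would live in a flag service (LaunchDarkly or an in-house equivalent) rather than a dict:

```python
import hashlib

# Hypothetical rollout table: flag name -> percentage of users enabled.
# Ship the new path dark, then ramp 10 -> 50 -> 100 as metrics hold.
ROLLOUTS = {"new-billing-path": 10}

def is_enabled(flag: str, user_id: str) -> bool:
    # Deterministic bucketing: hashing flag+user means a ramp-up never
    # flips the same user back and forth between code paths.
    pct = ROLLOUTS.get(flag, 0)
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < pct

def legacy_billing(user_id: str, cents: int) -> str:
    return f"legacy charged {user_id} {cents}"

def new_billing(user_id: str, cents: int) -> str:
    # Stand-in for the new (perhaps AI-assisted) implementation.
    return f"new charged {user_id} {cents}"

def charge(user_id: str, cents: int) -> str:
    # The proven path stays the default; the new path earns traffic gradually.
    if is_enabled("new-billing-path", user_id):
        return new_billing(user_id, cents)
    return legacy_billing(user_id, cents)

print(charge("user-42", 1999))
```

The point is less the hashing trick than the shape: risky changes merge to trunk early but reach users incrementally, so a bad AI-generated path is a config change away from zero traffic.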
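On measuring outcomes: CFR and review time fall straight out of records you already have. A rough sketch, assuming you can export deploy and PR data from your pipeline and code host; the record shapes and toy data are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

@dataclass
class Deploy:
    id: str
    caused_incident: bool  # linked to a rollback, hotfix, or incident ticket

@dataclass
class PullRequest:
    opened: datetime
    merged: datetime

def change_failure_rate(deploys: list[Deploy]) -> float:
    # DORA's definition: deployments causing a failure / total deployments.
    return sum(d.caused_incident for d in deploys) / len(deploys) if deploys else 0.0

def median_review_hours(prs: list[PullRequest]) -> float:
    return median((pr.merged - pr.opened).total_seconds() / 3600 for pr in prs)

# Toy data; in practice, pull these from your deploy pipeline and code host,
# and trend them before and after expanding AI usage.
deploys = [Deploy("d1", False), Deploy("d2", True), Deploy("d3", False), Deploy("d4", False)]
prs = [PullRequest(datetime(2025, 1, 1, 9), datetime(2025, 1, 1, 15)),
       PullRequest(datetime(2025, 1, 2, 9), datetime(2025, 1, 3, 9))]
print(f"CFR: {change_failure_rate(deploys):.0%}, median review: {median_review_hours(prs):.1f}h")
```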
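And on guarding PR quality: running machine checks before any human looks keeps reviewer attention on design and correctness rather than basics. A sketch of such a gate, assuming a Python stack; the specific tools (pytest, ruff, bandit, pip-licenses) are placeholders for whatever you already run:

```python
import subprocess
import sys

# Each check is (label, command). Swap in your stack's equivalents.
CHECKS = [
    ("tests", ["pytest", "-q"]),
    ("lint", ["ruff", "check", "."]),
    ("security", ["bandit", "-q", "-r", "src"]),
    ("licenses", ["pip-licenses", "--fail-on", "GPLv3"]),
]

def run_gate() -> int:
    failed = []
    for label, cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        status = "ok" if result.returncode == 0 else "FAIL"
        print(f"[{status}] {label}")
        if result.returncode != 0:
            failed.append(label)
    # Nonzero exit blocks the PR from entering human review.
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(run_gate())
```

Wire this into CI as a required status check so the gate, not a reviewer's patience, enforces the baseline.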
Rethinking "who is a developer"
Designers, PMs, and managers can now ship working prototypes with AI. Treat that as a feature, not a threat. Provide safe lanes, clear review policies, and simple scaffolding so their contributions help, not hinder.
Bottom line
AI accelerates what you already are. If your SDLC is solid and your team feels safe to flag issues, you'll see real gains. If not, AI just helps you create more rework, faster. Focus on flow, feedback loops, and team trust - then apply AI with intent.
If you're formalizing AI skills for your team, see this curated path: AI Certification for Coding.