DORA 2025: AI Boosts Output, Magnifies Dysfunction as Trust Lags and Instability Rises
AI amplifies software teams: high performers soar, weak systems stumble. It boosts throughput, but trust and stability lag behind; invest in platforms, policies, and value stream management for lasting gains.

AI in Software Development: Amplifier, Not Fix
Nearly 90% of technology professionals now use AI at work. Yet the 2025 DORA State of AI-assisted Software Development report shows a clear gap: teams use AI, but trust it less than they rely on it.
The core finding cuts through the noise. "AI's primary role in software development is that of an amplifier." High-performing organisations get stronger. Struggling ones expose more of their dysfunction. Tools don't transform organisations; systems do.
The Trust Gap Is Real
Developers are sceptical about accuracy. The 2025 Stack Overflow Developer Survey reports 46% distrust, 33% trust, and only 3% "high trust" in AI output. Heavy usage doesn't equal confidence.
This mismatch shows up every day: AI is fast, but teams aren't convinced it's consistently correct or safe without strong checks.
Speed Went Up. Stability Didn't.
DORA finds a positive link between AI adoption and delivery throughput. But instability is rising. Teams moved faster, while systems and processes didn't keep pace.
The research also tested whether "fail fast, fix fast" would offset the instability. It didn't. More speed alone didn't rescue quality or reliability.
Output Pressure Without Relief
As one leader noted, "faster doesn't always mean better." AI helps people push out more work. But burnout, broken processes, and clunky cultures don't vanish. In some teams, pressure ramps up: more output expected, same resources, same stress.
System Over Tools: The AI Capabilities Model
DORA introduces an AI Capabilities Model built on seven organisational practices that amplify AI's benefits. It focuses on team- and organisation-level systems, not individual tools.
Clear AI policies, healthy data ecosystems, and quality internal platforms are non-negotiable. Organisations with a strong user-centric focus see amplified gains; those without it often see negative side effects.
Platform Engineering Is Now Table Stakes
90% of organisations report using internal platforms, and 76% have dedicated platform teams. High-quality platforms enable guardrails, shared capabilities, and scale for AI-assisted development.
One nuance: high-quality platforms correlate with a slight rise in delivery instability, which the report interprets as risk compensation. Teams with strong recovery capabilities experiment more while preserving overall reliability.
Seven Team Profiles: Know Your Starting Point
The report defines seven profiles across performance, stability, and well-being. As explained by DORA lead Nathen Harvey, these range from "harmonious high-achievers" to teams facing "foundational challenges."
Use the profiles to target investment: improve system capabilities first, then scale AI usage.
Value Stream Management Magnifies Returns
The research dismisses the "AI is a tools problem" belief. It's a systems problem that requires organisational change. Mature value stream management (VSM) turns local AI gains into organisation-wide outcomes.
The pattern mirrors cloud adoption: organisations that rethought architecture, team structures, and operations unlocked value; lift-and-shift alone delivered little.
Developers Still See Personal Wins
Despite the trust gap, more than 80% report higher productivity with AI. 59% see improved code quality. The top use case is writing new code, with 71% of those who write code using AI assistance.
These benefits are real, but they compound only when platforms, policies, and practices are ready.
What To Do Next: A Practical Checklist
- Define AI guardrails: data use, privacy, and IP policies; approval workflows for models, plugins, and external APIs.
- Measure what matters: DORA metrics plus stability (change failure rate, incident minutes) and developer well-being signals; see the first sketch after this list.
- Platform first: paved roads, secure-by-default templates, golden paths, and shared evaluations for AI-generated code.
- Raise quality gates: stricter reviews for AI-assisted changes, pairing, linters, SAST/DAST, SBOMs, and dependency hygiene.
- Data discipline: clean, documented internal datasets and prompt libraries; manage secrets; log prompts/outputs for auditability (audit-log sketch below).
- Human-in-the-loop by design: mandatory verification for critical paths; tests before merge; no blind copy-paste (merge-gate sketch below).
- VSM practices: map value streams, remove bottlenecks, and tie AI use to measurable stage outcomes.
- Experiment safely: feature flags, canary releases, off-by-default AI features, chaos drills to validate recovery (feature-flag sketch below).
- Upskill teams: teach prompt patterns, failure modes, and review tactics for AI code. Consider structured paths such as AI certification for coding.
- Protect people: don't inflate output quotas without resources; fix process debt; monitor burnout and act on it.
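To make the "measure what matters" item concrete, here is a minimal Python sketch of two of the stability signals named above: change failure rate and incident minutes. The Deployment and Incident records are illustrative stand-ins for whatever your deployment and incident tooling exports, not part of any DORA tooling.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Deployment:
    deployed_at: datetime
    caused_failure: bool  # did this change trigger an incident or rollback?

@dataclass
class Incident:
    started_at: datetime
    resolved_at: datetime

def change_failure_rate(deployments: list[Deployment]) -> float:
    """Share of deployments that led to a production failure."""
    if not deployments:
        return 0.0
    return sum(d.caused_failure for d in deployments) / len(deployments)

def incident_minutes(incidents: list[Incident]) -> float:
    """Total minutes of production impact across all incidents."""
    return sum((i.resolved_at - i.started_at).total_seconds() / 60 for i in incidents)

# Example: 2 failing changes out of 10 deployments -> 20% change failure rate
deploys = [Deployment(datetime.now(), caused_failure=(i < 2)) for i in range(10)]
print(f"Change failure rate: {change_failure_rate(deploys):.0%}")
```

Tracked per team over time, these two numbers surface exactly the throughput-versus-stability tension the report describes.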
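For the data-discipline item, here is a minimal sketch of prompt/output audit logging, assuming a JSONL file as the sink. The user and model names are placeholders; hashing the output instead of storing it raw is one illustrative way to keep the log compact while still letting you verify what was generated.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.jsonl", level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

def log_ai_interaction(user: str, model: str, prompt: str, output: str) -> None:
    """Append one structured, timestamped record per AI interaction."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "prompt": prompt,
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),  # hash, not raw output
    }
    audit_log.info(json.dumps(record))

# Hypothetical usage: wrap every model call so nothing ships without a trail
log_ai_interaction("dev-42", "example-model", "Refactor this function...", "def new_impl(): ...")
```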
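The human-in-the-loop item can be made enforceable rather than aspirational. The rule set below is a hypothetical policy, not DORA guidance: tests must pass, and AI-assisted changes that touch critical paths need a second approval.

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    ai_assisted: bool
    tests_passed: bool
    human_approvals: int
    touches_critical_path: bool

def may_merge(cr: ChangeRequest) -> bool:
    """Block unverified merges; critical AI-assisted changes need two approvals."""
    if not cr.tests_passed:
        return False
    required = 2 if (cr.ai_assisted and cr.touches_critical_path) else 1
    return cr.human_approvals >= required

# One approval is not enough for an AI-assisted change on a critical path
cr = ChangeRequest(ai_assisted=True, tests_passed=True, human_approvals=1, touches_critical_path=True)
print(may_merge(cr))  # False
```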
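And for safe experimentation, a minimal sketch of off-by-default flags with deterministic canary bucketing. The flag names and percentages are invented for illustration; a real deployment would use a feature-flag service rather than a hard-coded dict.

```python
import hashlib

# AI-assisted features default to off and must be enabled explicitly
FLAGS = {
    "ai_code_suggestions": {"enabled": True, "canary_percent": 5},
    "ai_auto_merge": {"enabled": False, "canary_percent": 0},  # off by default
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministic bucketing: the same user always gets the same answer."""
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < cfg["canary_percent"]

print(is_enabled("ai_code_suggestions", "dev-42"))  # True only in the ~5% canary bucket
print(is_enabled("ai_auto_merge", "dev-42"))        # always False until enabled
```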
The Bottom Line
Healthy teams climb higher with AI. Shaky ones fall faster. Treat AI adoption as a comprehensive transformation: invest in platforms, data ecosystems, engineering discipline, and VSM to capture durable gains.
For deeper detail, see the full DORA research (142 pages) on the DORA research site and the latest Stack Overflow Developer Survey.