Air Force DASH 2 Proves AI-Human Teaming Delivers Faster, More Accurate Battle Management
Air Force's DASH 2 shows AI delivering sub-10-second recommendations and 30x more options while keeping humans in charge; the larger option set enabled faster, parallel kill chains and better outcomes.

Air Force's DASH 2 shows how AI boosts battle management speed and decision quality
The Air Force completed the second Decision Advantage Sprint for Human-Machine Teaming (DASH 2), an experiment focused on using AI to assist operators in fast, high-stakes command and control decisions. The result: faster recommendations, more options on the table, and human judgment kept in the loop.
Key outcomes you can benchmark
- Cycle time: AI-generated recommendations in under 10 seconds.
- Option space: 30x more options than human-only teams; two vendors produced 6,000+ solutions for ~20 problems in one hour.
- Quality: Accuracy on par with human performance after just two weeks of development; one algorithm tweak would have raised validity from ~70% to 90%+.
- Parallel execution: More viable options enable multiple "kill chains" (think parallel workstreams) to be executed at once.
How the sprint worked
Seven teams, six from industry and one from the Shadow Operations Center-Nellis, built AI-enabled microservices to assist with the "match effectors" function: selecting the best available system to engage a target. Teams observed crews operating without machine support, then iterated tools with operators in the loop. Final demos compared humans alone vs. human-machine teams on speed, quantity, and quality of decisions.
Evaluation centered on improved decision outcomes, not just faster data processing. Industry partners kept their IP while the Air Force captured integration needs for future C2 software. The 711th Human Performance Wing measured operator performance, workload, and teaming dynamics, confirming that AI can accelerate decisions while keeping humans responsible for judgment.
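To make the "match effectors" function concrete, here is a minimal sketch of what such a microservice's core logic might look like. Everything here is an illustrative assumption: the `Effector` fields, the scoring weights, and the `match_effectors` name are invented for this example and do not reflect the Air Force's or any vendor's actual models.

```python
from dataclasses import dataclass

@dataclass
class Effector:
    name: str
    range_km: float      # maximum engagement range
    ready: bool          # currently available for tasking
    prob_effect: float   # estimated probability of achieving the desired effect
    cost: float          # relative resource cost of employing this effector

def match_effectors(effectors, target_range_km, top_n=3):
    """Rank available effectors against a single target.

    Filters out options that are unavailable or out of range, then
    returns the top_n candidates scored by probability of effect
    penalized by resource cost, so the operator sees ranked options
    rather than a single machine-made choice.
    """
    candidates = [
        e for e in effectors
        if e.ready and e.range_km >= target_range_km
    ]
    # Simple illustrative score: favor high probability of effect,
    # lightly penalize resource cost. Real systems would weigh many
    # more factors (risk, timing, deconfliction).
    ranked = sorted(candidates,
                    key=lambda e: e.prob_effect - 0.1 * e.cost,
                    reverse=True)
    return ranked[:top_n]
```

Returning a ranked short list rather than a single answer is what keeps the human in the loop: the operator chooses among scored options instead of approving one opaque output.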
Why this matters to operations leaders
- Throughput without blind spots: Expand your option set while maintaining human oversight.
- Decision quality under pressure: Sub-10-second recommendations help teams act within short windows.
- Operator-centered AI: Tools are built with crews, not for them, improving trust, usability, and adoption.
- Clear governance: Co-development and IP clarity speed integration while reducing rework.
Adoption playbook you can copy
- Pick one decision function: Start with a high-impact, repeatable choice (e.g., task-to-asset matching, incident routing, inventory allocation).
- Instrument the baseline: Measure speed, option count, and decision quality before adding AI.
- Build microservices: Small AI services that slot into current workflows beat monoliths.
- Iterate with operators: Observe real work, prototype, test, improve, all inside a sprint cadence.
- Score every recommendation: Include risk, opportunity gain/loss, and resource impact with each option.
- Keep humans accountable: Define decision rights, override rules, and escalation paths.
- Prove it in realistic drills: Test under stress with live data, tight timing, and mixed teams.
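Two of the steps above, instrumenting the baseline and scoring every recommendation, can be sketched in a few lines. This is a hypothetical illustration: the `Recommendation` fields and `DecisionMetrics` class are assumptions chosen to mirror the three measures DASH 2 compared (speed, option count, decision quality), not a real instrumentation library.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    option: str
    risk: float           # 0..1 estimated downside if it fails
    opportunity: float    # 0..1 estimated upside if it succeeds
    resource_cost: float  # relative resource impact

@dataclass
class DecisionMetrics:
    """Baseline instrumentation: capture cycle time, option count,
    and validity rate for both human-only and human-machine runs."""
    start: float = field(default_factory=time.monotonic)
    options_considered: int = 0
    valid_options: int = 0

    def record(self, recs, valid_count):
        # Log one decision cycle: how many options were surfaced,
        # and how many survived operator/expert validation.
        self.options_considered += len(recs)
        self.valid_options += valid_count

    @property
    def cycle_time_s(self):
        return time.monotonic() - self.start

    @property
    def validity_rate(self):
        return self.valid_options / max(self.options_considered, 1)
```

Running the same harness before and after adding AI gives you the benchmark comparison the sprint relied on, rather than anecdotes.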
Governance that scales
- Model validity: Track accuracy by scenario; require fail-states and confidence estimates.
- Human performance: Monitor workload, trust calibration, and error recovery.
- Data lineage: Log inputs, versions, and rationale for post-action reviews.
- Risk management: Align with recognized frameworks such as the NIST AI RMF.
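The data-lineage item above is straightforward to prototype: log the inputs, model version, and rationale behind every recommendation so post-action reviews can reconstruct it. This sketch assumes a simple JSON-lines log; the function name and record fields are illustrative, not any specific system's schema.

```python
import datetime
import hashlib
import json

def log_recommendation(log_file, model_version, inputs,
                       recommendation, rationale):
    """Append one decision record for post-action review.

    Stores a SHA-256 hash of the inputs (so the exact input state can
    be verified later without copying sensitive data into the log),
    the model version, and the stated rationale.
    """
    record = {
        "timestamp": datetime.datetime.now(
            datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "recommendation": recommendation,
        "rationale": rationale,
    }
    log_file.write(json.dumps(record) + "\n")
    return record
```

One line per decision, with version and rationale attached, is often enough to answer the post-action questions that matter: what did the model see, which build produced the answer, and why was it preferred.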
Where this is heading
DASH is part of a broader push to modernize command and control and strengthen joint decision advantage, aligned with the Pentagon's Combined Joint All-Domain Command and Control initiative (CJADC2). Expect future sprints to generate full courses of action with embedded risk, opportunity, and resource trade-offs, so teams can choose and execute faster with confidence.
Level up your team's AI capability
If you're building human-machine workflows in your organization, explore role-based learning paths and certifications to speed adoption: AI courses by job role and popular AI certifications.