Human-Machine Teaming in Battle Management: What DASH 3 Proved and How to Apply It
The 2025 Decision Advantage Sprint for Human-Machine Teaming (DASH 3) brought U.S., Canadian, and U.K. operators together at the Shadow Operations Center-Nellis to pressure-test AI inside battle management workflows. The focus: decision speed, option quality, and coalition interoperability under real operational constraints.
Led by the Advanced Battle Management System Cross-Functional Team and executed with the Air Force Research Laboratory's 711th Human Performance Wing, the U.S. Space Force, and the 805th Combat Training Squadron (ShOC-N), the event moved AI from theory to field-tested utility.
Why this matters for managers and ops leaders
Operations break down without timely, viable options. DASH 3 showed AI can compress planning time from minutes to seconds while keeping humans in charge of final decisions. More options, generated faster, mean greater agility under pressure.
What was tested
Seven teams (six from industry and one from ShOC-N) partnered with operators to generate multi-domain courses of action (COAs). These included long-range kill chains, electromagnetic battle management problems, space and cyber considerations, and agile combat employment, such as re-basing aircraft.
U.S. Air Force Col. John Ohlund explained the core value: "A bomber may be able to attack from multiple avenues of approach, each presenting unique risks and requiring different supporting assets such as cyber, ISR, refueling, and air defense suppression. Machines can generate multiple paths, supporting assets, compounding uncertainties, timing, and more."
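The option space Ohlund describes grows multiplicatively: each avenue of approach can pair with different supporting-asset packages. A minimal sketch of that enumeration (the avenue names and support options below are illustrative, not from the experiment):

```python
# Hypothetical sketch: enumerating candidate COAs across avenues of
# approach and supporting-asset options. All names are illustrative.
from itertools import product

avenues = ["north corridor", "coastal ingress", "high-altitude direct"]
support = {
    "cyber": ["preemptive", "none"],
    "refueling": ["pre-strike", "post-strike"],
    "sead": ["dedicated escort", "standoff jamming"],
}

# Every combination is a candidate COA a machine can enumerate and score.
candidates = [
    {"avenue": a, **dict(zip(support, opts))}
    for a in avenues
    for opts in product(*support.values())
]
print(len(candidates))  # 3 avenues x 2 x 2 x 2 options = 24 candidates
```

Even this toy space yields 24 candidates; a machine can score all of them in seconds, while a human team would typically only sketch a handful.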
Speed and quality gains
AI-generated recommendations arrived in under one minute. Compared to traditional processes, that's up to 90% faster, with the best machine outputs showing 97% viability and tactical validity.
Human teams averaged 19 minutes, with 48% of COAs considered viable and tactically valid. As Ohlund put it, the value is speed plus quality, while keeping human judgment on the loop.
Trust moved from skepticism to confidence
"I was skeptical about technology being integrated into decision-making, given how difficult and nuanced battle COA building can be," said U.S. Air Force 1st Lt. Ashley Nguyen. "But working with the tools, I saw how user-friendly and timesaving they could be. The AI didn't replace us; it gave us a solid starting point to build from."
Nguyen added, "Some of the AI-generated outputs were about 80% solutions. They weren't perfect, but they were a good foundation." This is the practical sweet spot: AI drafts, humans decide.
Coalition interoperability by design
"We understand that the next conflict cannot be won alone without the help of machine teammates and supported by our allies," said Royal Canadian Air Force Capt. Dennis Williams. "DASH 3 demonstrated the value of these partnerships as we worked together in a coalition-led, simulated combat scenario."
U.S. Air Force Lt. Col. Shawn Finney highlighted the approach: keep classification barriers low enough to bring allies in early and often. That widened participation and accelerated shared learning across nations.
Risks addressed: weather and AI hallucinations
Weather wasn't fully integrated due to simulation limits, so teams "white carded" effects such as airfield closures and delays. "We fully understand its operational impact and are committed to integrating weather data into future decision-making models," said Ohlund.
On hallucinations, teams engineered safeguards and monitored outputs closely. None were observed during the experiment, but the risk remains, especially with LLMs and military-specific jargon, so teams are refining guardrails to keep outputs reliable.
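A guardrail of this kind can be as simple as screening machine outputs against curated ground truth before a human reviews them. A minimal sketch, with entirely hypothetical asset names, constraints, and checks (not the experiment's actual safeguards):

```python
# Hypothetical guardrail sketch: screen AI-proposed COAs before human
# review. Asset names, zones, and the log format are all illustrative.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("coa-guardrail")

KNOWN_ASSETS = {"B-21", "KC-46", "EA-18G", "RQ-4"}  # curated inventory
NO_GO_AREAS = {"ZONE-ALPHA"}                        # hard constraints

def screen_coa(coa: dict) -> list[str]:
    """Return human-readable flags; an empty list means the COA passes."""
    flags = []
    for asset in coa.get("assets", []):
        if asset not in KNOWN_ASSETS:
            flags.append(f"unknown asset '{asset}' (possible hallucination)")
    for wp in coa.get("waypoints", []):
        if wp in NO_GO_AREAS:
            flags.append(f"route crosses no-go area '{wp}'")
    for f in flags:
        log.warning(f)  # every flag is logged for after-action review
    return flags

flags = screen_coa({"assets": ["B-21", "X-99"], "waypoints": ["ZONE-ALPHA"]})
print(len(flags))  # 2
```

Checks like these never approve a COA on their own; they only flag outputs so the human on the loop knows where to look first.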
Operational playbook: apply these lessons now
- Start with human-in-the-loop: let AI draft options, and keep humans accountable for final decisions.
- Measure what matters: baseline time-to-decision and viability rates, then compare against AI-assisted runs.
- Build trust deliberately: train operators on the tools, run reps in realistic scenarios, and review outcomes together.
- Design for interoperability: use shared data formats, common vocabularies, and unclassified pilots to include partners.
- Mitigate risk early: define no-go constraints, curate domain-specific terminology, and log/monitor AI outputs.
- Simulate real constraints: if full data (e.g., weather) isn't available, inject scenario cards so teams plan around real limits.
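The "measure what matters" step above reduces to two numbers per run: time-to-decision and viability rate. A minimal sketch of the comparison, using made-up run logs (the figures below are illustrative, not DASH 3 data):

```python
# Hypothetical sketch: comparing baseline vs AI-assisted COA runs.
# Run data is illustrative; only the two metrics mirror the playbook.
from statistics import mean

# Each run: (minutes to a decision-ready COA set, fraction judged viable)
baseline_runs = [(21.0, 0.45), (19.0, 0.50), (17.0, 0.48)]
assisted_runs = [(0.9, 0.95), (0.8, 0.97), (1.0, 0.96)]

def summarize(runs):
    """Average time-to-decision (minutes) and viability rate for a set of runs."""
    minutes = mean(t for t, _ in runs)
    viability = mean(v for _, v in runs)
    return minutes, viability

base_min, base_v = summarize(baseline_runs)
ai_min, ai_v = summarize(assisted_runs)

print(f"time-to-decision: {base_min:.1f} min -> {ai_min:.1f} min")
print(f"viability rate:   {base_v:.0%} -> {ai_v:.0%}")
```

Capturing both metrics on every exercise run, baseline and assisted alike, is what turns "the AI felt faster" into a defensible before-and-after claim.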
Related context
For broader doctrine on connecting sensors to shooters, see the DoD's approach to Joint All-Domain Command and Control (JADC2): defense.gov. For managing AI risk in operational systems, review NIST's AI Risk Management Framework: nist.gov.
Looking ahead
The 2026 DASH series will build on these gains: faster COA generation, tighter human-machine collaboration, and broader coalition participation. "By continuing to build trust with operators, improve AI systems, and foster international cooperation, the U.S. and its allies are taking critical steps toward meeting modern warfare challenges," said Ohlund.
Williams summed up the intent: "The more we can integrate AI into the decision-making process, the more time we can free up to focus on the human aspects of warfare."
Level up your team's AI fluency
If you lead operations and want your staff to speak the language of AI-supported decision-making, explore practical training paths by role: Complete AI Training - Courses by Job. Or scan the latest programs to upskill quickly: Latest AI Courses.