How Army Doctrine Writers Use AI: Speed, clarity, and a strict check on facts
Inside the Army's doctrine shop at Fort Leavenworth, writers are using AI to turn complex research into clear, usable manuals. The mission: explain Arctic combat, contested airspace, and deployment guidance at a level a high schooler could follow - without losing rigor.
Lt. Col. Scott McMahan put it plainly: "You can use it as essentially an editing tool to say, 'hey, I want this to be written at the 12th grade level,' and 'I want it to have a tone that is more approachable for the intended audience.'"
Doctrine is a shared language - and AI speeds the rewrite loop
Doctrine defines how the Army thinks and fights. Combined Arms Doctrine Directorate Director Richard Creed Jr. calls it a "foundational language" that lets commanders and units work across echelons and specialties.
The writers aren't expected to be subject-matter experts in every domain. Instead, they collect reports, test data, and lessons from current operations, then distill them into principles and tactics, techniques, and procedures (TTPs) that anyone can apply.
Tools and sources behind the scenes
Writers eased in with Camo GPT, then expanded to the Pentagon's GenAI.mil program with approved versions of Gemini and ChatGPT. They also pull raw numbers from the Army's data platform, Vantage, to ground ideas in real exercise results.
Creed emphasized their connection to current operations: "We're tightly enmeshed in the operational force… We're pulling in that information, and that information becomes part of the databases that the AI can mine for us more efficiently."
Explore the official Combined Arms Doctrine Directorate page: U.S. Army CADD. Learn more about the Army's data platform: Army Vantage.
How they actually write with AI
- Set constraints up front: reading level, tone, audience, length, and structure.
- Pattern mining: "I can take 10, 20, 30 of those documents, and I can say 'what's similar?' What is always the case here?"
- Cross-reference TTPs: ask, "What ideas from the airspace control manual help illustrate this point?"
- Style passes: tighten language, reduce jargon, and improve flow without losing technical meaning.
- Example sourcing: pull relevant history to explain tactics and frameworks.
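The "constraints up front" step can be sketched as a reusable prompt template. A minimal sketch, assuming a generic chat-style model; the field names and wording here are illustrative, not the Army's actual prompts or tools:

```python
def build_editing_prompt(text, audience, grade_level, tone, max_words):
    """Assemble an editing prompt that states all constraints before the text.
    Every field name and instruction here is illustrative, not an official template."""
    constraints = [
        f"Audience: {audience}",
        f"Reading level: grade {grade_level}",
        f"Tone: {tone}",
        f"Length: no more than {max_words} words",
        "Preserve all technical terms and their meanings.",
    ]
    bullet_list = "\n".join(f"- {c}" for c in constraints)
    return f"Rewrite the text below under these constraints:\n{bullet_list}\n\nTEXT:\n{text}"

# Hypothetical usage: the draft text is a stand-in, not real doctrine.
prompt = build_editing_prompt(
    "Units must deconflict airspace before employing indirect fires.",
    audience="junior leaders", grade_level=12, tone="approachable", max_words=300,
)
```

Stating every constraint before the draft text keeps the rewrite loop repeatable: the same template can be rerun against any document with only the parameters changing.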
Hallucinations happen - here's how they counter them
The most common failure isn't bad grammar. It's fake sources. McMahan's team has seen AI invent credible-sounding citations - like "a Military Review article from 1994 by a Colonel Jones" - that simply don't exist.
His rule: "It's gotta be validated. We have to have accountability for our words. I'm paid to check it."
A writer's checklist you can steal
- Define the output before you draft: audience, grade level, tone, and examples allowed.
- Keep a vetted source library. Only allow AI to summarize from documents you trust.
- Require citations with URLs/DOIs and verify each one. No link, no quote.
- Use AI for pattern discovery across reports; you make the judgment calls.
- Run a "red team" pass: ask AI to critique logic, edge cases, and missing assumptions.
- Log decisions and sources for every major claim. Future you will thank you.
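The "no link, no quote" rule from the checklist can be enforced as a first-pass filter. A minimal sketch, assuming citations arrive as dicts with optional `url` and `doi` keys (an assumed shape, not any real tool's format); it only checks that an identifier is syntactically plausible - actually resolving the link or looking the source up is the required second step and is omitted here:

```python
import re
from urllib.parse import urlparse

# DOIs start with a "10." prefix followed by a registrant code and a suffix.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_verifiable(citation):
    """Return True if a citation carries a URL or DOI that can at least
    be checked. Hypothetical input shape: {"url": ..., "doi": ...}."""
    if DOI_RE.match(citation.get("doi", "")):
        return True
    parsed = urlparse(citation.get("url", ""))
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```

A filter like this would have flagged the invented "Military Review article from 1994" before it reached a human reviewer: no resolvable identifier, no quote.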
A lightweight setup for content and technical teams
- Centralize your corpus: briefs, research, interviews, test results, and style guides in one repository.
- Use an enterprise AI workspace connected to that corpus for summarizing, pattern-finding, and drafting.
- Add a citation policy: require verifiable sources and maintain a source-of-truth index.
- Measure readability with AI, then edit for voice by hand. Let tools handle speed; you own the nuance.
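The "measure readability" step doesn't require an AI at all. A minimal sketch of the standard Flesch-Kincaid grade-level formula, using a rough vowel-group syllable count (an approximation; real editing tools use more careful syllabification):

```python
import re

def syllables(word):
    # Rough count: runs of vowels, minus a trailing silent "e".
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    if not sentences or not words:
        return 0.0
    return (0.39 * len(words) / len(sentences)
            + 11.8 * sum(syllables(w) for w in words) / len(words)
            - 15.59)
```

Running a score like this on each draft gives an objective check that the "12th grade level" target is actually being hit, leaving the human pass free to focus on voice.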
Why this approach works for writers
It prioritizes clarity, repeatable workflows, and source discipline. AI handles the heavy lift: cleaning prose, finding patterns, and surfacing examples. Humans set direction, make calls, and protect the integrity of the work.
That's the balance: faster drafting without letting accuracy slip.