U.S. Army CGSC Quality Assurance Office advances continuous improvement and educational excellence with AI
Fort Leavenworth, Kansas - The Command and General Staff College is putting practical AI to work to keep professional military education sharp and relevant. The Quality Assurance Office is leading the effort, turning large-scale feedback into fast, actionable decisions that improve learning and outcomes across the college.
AI workflows that turn data into decisions
QAO built new AI data workflows that analyze surveys and student writing in a fraction of the time. Tested with the Department of Distance Education in leadership and history common core courses, the workflows quickly surface trends from curriculum outcome surveys, reduce bias in analysis, and support sentiment and predictive assessments to better allocate resources. The result: broader, more timely feedback that reflects the full learning environment.
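To make the workflow concrete, here is a minimal sketch of the kind of batch survey-analysis pass described above. It is not QAO's actual pipeline: the sample comments, the `call_model` stub, and the batching scheme are assumptions standing in for whatever secure LLM endpoint and survey export an institution actually uses.

```python
"""Sketch of an AI-assisted survey-analysis pass (hypothetical)."""

# Illustrative free-text survey comments (assumptions, not real responses).
SAMPLE_COMMENTS = [
    "The leadership case studies felt directly relevant to my next assignment.",
    "Pacing in the history block was too fast to absorb the readings.",
    "More time for peer discussion would improve the seminars.",
]

def call_model(prompt: str) -> str:
    # Stand-in for an approved, secure LLM endpoint (assumption); returns
    # a canned response so this sketch runs without network access.
    return "Themes: relevance of case studies, pacing. Sentiment: mixed-positive."

def summarize_comments(comments: list[str], batch_size: int = 50) -> list[str]:
    """Batch free-text survey comments and ask the model to surface
    recurring themes and overall sentiment for each batch."""
    summaries = []
    for i in range(0, len(comments), batch_size):
        batch = comments[i : i + batch_size]
        prompt = (
            "Identify recurring themes and overall sentiment in these "
            "course-survey comments:\n- " + "\n- ".join(batch)
        )
        summaries.append(call_model(prompt))
    return summaries

if __name__ == "__main__":
    # Analysts still read and validate every summary, per QAO's practice.
    for summary in summarize_comments(SAMPLE_COMMENTS):
        print(summary)
```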
"We do a 2- and 4-year graduate survey, reported twice a year to the command so they understand the impact the curriculum has had at those career milestones," said Dr. Forrest Woolley, director, QAO. "We also gather feedback from senior leaders and general officers, as well as pre-command course battalion and brigade commanders to understand their perceptions of CGSC graduates." Streamlined efforts pushed survey response rates past 90%, giving leaders the data needed to guide program improvements.
Woolley emphasized the human role behind the tools. "We still have to read and review all of the outputs to validate AI is giving us the correct narrative and findings."
Guided Analytical Recommended Feedback (GARF): faster, richer feedback
Developed in-house by Woolley and Dr. Thom Crowson, GARF launched in August and helped instructors double grading output in a single course during the furlough. It is not a grading program. Instead, it generates 2-3 pages of rubric-based, individualized feedback in about six seconds, which instructors review and adapt to each student. Instructors reported roughly a 55% reduction in time spent analyzing and grading papers, with more time freed for mentoring and class preparation.
"It's still the instructor's responsibility to read the feedback, review the paper, and then adjust the feedback before determining the grade," Woolley noted. Crowson added that well-crafted prompts-built specifically for the course rubric-drive quality and accuracy, with output assessed at 90-95% accuracy. "Information generated by GARF should be treated as a resource rather than definitive guidance. When paired with the judgment of a seasoned military professional, it becomes a powerful tool to support student learning," he said.
This approach gives students clearer, more relevant input while helping them develop the critical skills expected of field grade officers. It also supports faculty amid resource constraints and growing mission demands.
System-wide improvements beyond the classroom
QAO also launched a new Army-wide survey system, led the Army's first Triennial Program Review, and set rigor factors for the information provided to leaders. These standards give the institution a shared way to measure, review, and act on data for continuous improvement.
For more on GARF, look for "Achieving cognitive overmatch through human-AI teaming" in the Field Artillery Professional Bulletin on December 17, 2025.
What education leaders can apply today
- Build AI-assisted review loops: Use AI to summarize large feedback sets (surveys, reflections, writing), then validate with faculty review; a spot-check sketch follows this list.
- Keep the rubric at the center: Map prompts to specific criteria; require instructors to review and personalize feedback before grading.
- Run frequent, brief surveys: Pair short pulse checks with deeper 2- to 4-year follow-ups to capture outcomes and impact over time.
- Measure what matters: Track accuracy, time saved, and student learning gains, not just usage.
- Protect quality and equity: Use bias checks, manual spot reviews, and clear data standards to maintain trust.
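Below is a minimal sketch of the spot-check loop referenced above: route a random sample of AI outputs to faculty reviewers and track agreement over time. The sampling rate, labels, and review workflow are illustrative assumptions, not CGSC figures.

```python
import random

def select_spot_sample(n_items: int, rate: float = 0.10, seed: int = 0) -> list[int]:
    """Pick which AI outputs receive a manual faculty review this cycle."""
    rng = random.Random(seed)
    k = max(1, round(n_items * rate))
    return sorted(rng.sample(range(n_items), k))

def agreement_rate(ai_labels: dict[int, str], faculty_labels: dict[int, str]) -> float:
    """Share of spot-checked items where faculty agreed with the AI output.
    A drop below your chosen threshold should trigger a prompt review."""
    if not faculty_labels:
        return 0.0
    agree = sum(ai_labels[i] == faculty_labels[i] for i in faculty_labels)
    return agree / len(faculty_labels)

if __name__ == "__main__":
    sample = select_spot_sample(n_items=200)
    print(f"Flagged {len(sample)} of 200 AI outputs for faculty review: {sample[:5]}...")
```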
Quick start checklist
- Inventory current feedback channels (surveys, LMS data, writing assignments).
- Select a secure AI environment that supports your data governance.
- Create prompt libraries aligned to your rubrics; pilot in one course (a structural sketch follows this checklist).
- Compare faculty time-on-task, feedback depth, and student outcomes pre/post.
- Train faculty on review practices and how to calibrate AI-generated feedback.
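One way to keep prompts aligned to rubrics, as the checklist suggests, is a small versioned prompt library: one template per course and criterion, so a pilot can compare prompt revisions against outcomes. The course code, criterion names, and template wording below are hypothetical.

```python
# Hypothetical prompt library keyed by (course, criterion, version).
PROMPT_LIBRARY: dict[tuple[str, str, str], str] = {
    ("LDR101", "argument_quality", "v1"): (
        "Assess how well the paper meets this standard: '{standard}'. "
        "Quote specific passages and suggest one concrete revision.\n\n{paper}"
    ),
    ("LDR101", "use_of_doctrine", "v1"): (
        "Evaluate the paper's use of doctrine against: '{standard}'. "
        "Note strengths first, then gaps.\n\n{paper}"
    ),
}

def get_prompt(course: str, criterion: str, version: str = "v1") -> str:
    """Fetch a rubric-aligned template; raises KeyError on unknown entries
    so stale or retired prompts fail loudly during a pilot."""
    return PROMPT_LIBRARY[(course, criterion, version)]

if __name__ == "__main__":
    template = get_prompt("LDR101", "argument_quality")
    print(template.format(standard="Advances a clear, arguable thesis.",
                          paper="Mission command demands disciplined initiative..."))
```

Versioning each template lets a pilot course tie a specific prompt revision to measured changes in feedback depth and faculty time-on-task, rather than guessing which wording change mattered.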