How the Army Is Using AI to Speed Up, Clean Up, and Clarify Promotion Boards
The Army Human Resources Command has introduced artificial intelligence into noncommissioned officer boards to quickly narrow the field to those who meet clear, job-relevant thresholds. The goal: make boards more efficient, smarter, and more transparent without replacing human judgment.
"The first thing to understand is that we are not using it to replace humans," said Col. Tom Malejko, chief talent analytics officer at Human Resources Command, during the Association of the U.S. Army's annual meeting in Washington. "We're using it very broadly to augment their decision-making."
The team built a "naive" AI screen that ignores names, branches, and ranks. Instead, it flags whether someone has reached a certain command level or completed required schooling: objective markers that predict readiness for promotion.
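As a rough illustration of how a qualifications-first screen like this might work, here is a minimal Python sketch. The field names, thresholds, and sample records are hypothetical, not the Army's actual model or data schema.

```python
# Minimal sketch of a "qualifications first" screen. Field names and
# thresholds are illustrative assumptions, not a real schema.

SENSITIVE_FIELDS = {"name", "branch", "rank"}  # excluded from screening inputs

def screen_candidate(record: dict) -> bool:
    """Return True if the record meets objective, job-relevant thresholds."""
    # Drop identifying attributes so they cannot influence the decision.
    inputs = {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

    # Objective markers (assumed names): command level reached, schooling done.
    meets_command_level = inputs.get("command_level", 0) >= 2
    completed_schooling = inputs.get("required_school_complete", False)
    return meets_command_level and completed_schooling

candidates = [
    {"name": "A", "branch": "X", "rank": "SFC", "command_level": 3, "required_school_complete": True},
    {"name": "B", "branch": "Y", "rank": "SFC", "command_level": 1, "required_school_complete": False},
]
ready_for_full_review = [c for c in candidates if screen_candidate(c)]
```

The point of the sketch is the order of operations: strip identity fields first, then apply only documented, job-relevant thresholds before any human review.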
This has been especially useful in sergeant first class evaluation boards, where every NCO at that rank must be evaluated and assigned a merit list score. Many aren't competitive yet, and the AI helps board members focus their time on those who are.
Before acting on any recommendation, a team reviews every step the model took to check for bias and confirm the logic holds up. Human oversight is built in, end to end.
"Our approach is to work our way through our noncommissioned officer boards first, learn from them and then pilot from those," Malejko said. "Based on what we've learned, go back to Congress and ask for additional authorities so we can actually execute them within our officer boards, since Congress ultimately controls those responsibilities."
The Army has also used an AI-like algorithm for four years to determine which officers should be invited to a promotion board, Maj. Gen. Hope Rampy said. Early on, more than 30 officers were missed and had to be manually added; now that's down to about three to five each year as the model is retrained and refined.
Next up: a program to search the Army's personnel and pay system for specific mission-related skills. Because soldiers can add languages, certifications, and even hobbies to their talent profiles, the system could surface skills beyond someone's primary job.
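To show the general idea of a searchable skills layer, here is a hedged sketch of querying self-reported talent profiles for a mission-related skill. The profile fields and data are made up for illustration and do not reflect the Army's personnel and pay system.

```python
# Illustrative skills search over self-reported talent profiles.
# Profile structure and values are assumptions for this example only.

profiles = [
    {"id": 101, "primary_mos": "92Y", "skills": {"arabic", "forklift certification"}},
    {"id": 102, "primary_mos": "11B", "skills": {"python", "ham radio"}},
]

def find_skill(profiles: list[dict], needed: str) -> list[int]:
    """Return profile IDs whose self-reported skills include the requested one."""
    needed = needed.lower()
    return [p["id"] for p in profiles if needed in {s.lower() for s in p["skills"]}]

print(find_skill(profiles, "Arabic"))  # -> [101], regardless of primary job
```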
What HR Leaders Can Apply Today
- Use clear, objective screens first. Filter on proven thresholds (e.g., required certifications, role level, time-in-grade) before deeper review.
- Keep humans accountable. Require a documented review of model logic and edge cases before any decision is final.
- Pilot with one population. Start small (one role or level), learn, and scale only after you see consistent gains and low error rates.
- Audit outcomes annually. Track false negatives (missed talent), retrain the model, and publish improvements (a minimal audit sketch follows this list).
- Be transparent about criteria. Clearly state what the AI screens and what the board decides.
- Build a living skills inventory. Let employees self-report skills and interests; validate with assessments and use for internal mobility and staffing.
- Limit sensitive inputs. Remove names and other identifiable attributes from model inputs to reduce bias risk.
- Train your board members. Teach reviewers how the model works, where it can fail, and how to challenge it.
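As referenced above, here is a minimal sketch of the annual false-negative audit. The record fields ("screened_in" for the model's decision, "selected" for the board's outcome) and the sample data are assumptions, not any real board output.

```python
# Sketch of a false-negative audit: how often did the screen filter out
# someone the board ultimately selected? Fields and data are illustrative.

def false_negative_rate(records: list[dict]) -> float:
    """Share of board-selected people the screen would have filtered out."""
    selected = [r for r in records if r["selected"]]
    missed = [r for r in selected if not r["screened_in"]]
    return len(missed) / len(selected) if selected else 0.0

history = [
    {"screened_in": True, "selected": True},
    {"screened_in": False, "selected": True},   # missed talent: retrain on cases like this
    {"screened_in": True, "selected": False},
]
print(f"False negative rate: {false_negative_rate(history):.1%}")  # 50.0%
```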
Technical Practices Worth Mirroring
A simple "qualifications first" screen reduces noise and keeps the focus on readiness, not reputation. It won't fix everything, but it sets a clean baseline.
Bias checks matter. Review step-by-step model decisions, run disparate impact tests, and keep a log of overrides and why they happened.
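One common way to run a disparate impact test is the four-fifths (adverse impact ratio) rule. The sketch below assumes illustrative group labels and screen-in flags, not real board data, and the 0.8 threshold is the conventional rule of thumb, not a legal determination.

```python
# Sketch of a four-fifths rule check on screen-in rates by group.
# Group labels and records are illustrative assumptions.

from collections import defaultdict

def selection_rates(records: list[dict]) -> dict[str, float]:
    """Screen-in rate per group from records with 'group' and 'screened_in'."""
    totals, passed = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        passed[r["group"]] += int(r["screened_in"])
    return {g: passed[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group rate divided by highest; below 0.8 warrants review."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates([
    {"group": "A", "screened_in": True}, {"group": "A", "screened_in": True},
    {"group": "B", "screened_in": True}, {"group": "B", "screened_in": False},
])
print(adverse_impact_ratio(rates))  # 0.5 -> flag for review and log the outcome
```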
Want a framework to anchor this work? See the NIST AI Risk Management Framework (AI RMF) for practical guidance on governance and controls. For context on the event where these updates were shared, visit the Association of the U.S. Army (AUSA).
Getting Your Organization Ready
- Map role-specific thresholds that predict readiness (education, certifications, performance history).
- Start with a rules-based screen before moving to more complex models.
- Create a cross-functional review panel (HR, legal, DEIA, analytics) to approve model use and changes.
- Instrument metrics: time saved, accuracy, fairness indicators, appeal rates (see the sketch after this list).
- Unify HRIS data with a self-reported skills layer and connect it to internal gigs and staffing requests.
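For the metrics item above, a simple per-cycle record is enough to start. The field names and figures below are illustrative assumptions, not a standard schema.

```python
# Illustrative per-board-cycle metrics record; names and values are assumptions.

from dataclasses import dataclass

@dataclass
class BoardCycleMetrics:
    hours_saved: float           # reviewer time saved vs. the pre-screen baseline
    screen_accuracy: float       # agreement between screen and final board outcome
    adverse_impact_ratio: float  # fairness indicator (four-fifths rule)
    appeal_rate: float           # share of decisions formally appealed

cycle = BoardCycleMetrics(hours_saved=120.0, screen_accuracy=0.94,
                          adverse_impact_ratio=0.92, appeal_rate=0.01)
```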
If your HR team needs practical upskilling in AI screening, bias controls, and workflow automation, explore role-based courses here: Complete AI Training: Courses by Job.
Bottom line: keep the criteria objective, keep humans in charge, and keep improving the model with post-board reviews. That's how you get speed and fairness without sacrificing trust.