Federal Agencies Roll Out AI Strategy Plans: Takeaways for Government Contractors
OMB's AI playbook is here in two parts: M-25-21 (use and governance) and M-25-22 (acquisition). Covered agencies were told to publish AI strategies by September 30, 2025, and finalize detailed AI use and procurement policies by December 29, 2025.
Many agencies have now published their strategies, signaling a clear direction: accelerate adoption, keep risk proportional, and document everything. For contractors and grant recipients, this means new expectations, tighter review, and more oversight across the AI life cycle.
Quick hits
- Expect faster AI adoption inside agencies, with guardrails for high-impact uses.
- Policies due December 29, 2025, will define documentation, review, and contract terms for AI used in federal work.
- Contractors will need clear acceptable-use policies, stronger data governance, and human oversight for high-stakes decisions.
- Agencies will push for AI products and services developed and produced in the United States.
M-25-21: Accelerate adoption with guardrails
Agency strategies converge on a common stack: scalable AI infrastructure, quality data, an AI-ready workforce, and risk governance proportional to impact. AI must operate within existing security and compliance boundaries.
Examples point to the direction of travel. DHS is moving to continuous authorization models to enforce secure-by-design pathways. DOE has built a data governance structure with leadership roles and boards that enforce standards and traceability.
Workforce readiness is a priority. Agencies are investing in AI literacy for all personnel and hiring for specialized roles like ML engineering, evaluation, ethics, and cybersecurity. GSA is running cross-agency training and internal showcases to spread practical adoption.
For "high-impact AI" (where outputs drive legal, material, or binding decisions), M-25-21 requires minimum safeguards: pre-deployment testing, impact assessments, human oversight, continuous monitoring, and appeal mechanisms. Chief AI Officers (CAIOs) are central here: NARA is maintaining an AI Use Case Inventory and limiting waivers to exceptional cases, while leaders at the VA and CFPB can suspend or terminate AI uses that miss the mark.
What this means for your team now
Expect parallel obligations for contractors as agencies scale AI internally. Start with acceptable-use policies that bar entering client-sensitive or CUI into unapproved tools, and map your controls to your partner agency's minimum practices.
If AI is used in hiring, promotion, termination, or other high-impact decisions, ensure compliance with federal anti-discrimination law, EEOC guidance on algorithms, and OFCCP requirements. If you use AI for monitoring or productivity scoring, factor in NLRA risks, privacy laws, and wage-and-hour exposure.
Adopt human-in-the-loop review for high-stakes outputs, test for and remediate adverse impacts, provide notice and consent where required, and offer reasonable accommodations for applicants and employees interacting with AI systems. These steps will reduce friction when agencies ask for evidence.
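To make the human-in-the-loop step concrete, the review gate described above can be sketched as a simple routing rule: high-impact outputs never take effect until a reviewer signs off. This is a minimal illustration, not any agency's required design; the category list, function names, and confidence threshold are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical labels; real categories would follow your agency's
# definition of "high-impact AI" under M-25-21.
HIGH_IMPACT_USES = {"hiring", "promotion", "termination", "benefits"}

@dataclass
class AIDecision:
    use_case: str        # e.g. "hiring"
    recommendation: str  # the model's suggested outcome
    confidence: float    # model-reported confidence, 0.0-1.0

def requires_human_review(decision: AIDecision) -> bool:
    """High-impact use cases always get a human reviewer;
    lower-impact ones only when the model is unsure."""
    if decision.use_case in HIGH_IMPACT_USES:
        return True
    return decision.confidence < 0.80

def finalize(decision: AIDecision, reviewer_approval: Optional[bool]) -> str:
    """A decision takes effect only after any required review is recorded."""
    if requires_human_review(decision):
        if reviewer_approval is None:
            return "pending_review"
        return "approved" if reviewer_approval else "rejected"
    return "approved"
```

Under this sketch, `finalize(AIDecision("hiring", "advance", 0.99), None)` stays `"pending_review"` until a reviewer acts, which is exactly the evidence trail an agency audit would look for.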
M-25-22: AI acquisition rules you'll see in contracts
M-25-22 applies to contracts awarded under solicitations issued on or after September 30, 2025. Expect clauses that prohibit using non-public government data to train publicly or commercially available AI without explicit consent.
Contracts will need clear terms on IP rights, data ownership, portability, and long-term interoperability. Agencies are also directed to maximize the use of AI products and services developed and produced in the United States.
By December 29, 2025, covered agencies must update internal acquisition procedures to align with M-25-21 and M-25-22. Practically, this will define how contractors must identify AI used in performance (especially where FCI or CUI is processed), the minimum documentation for high-impact use cases, recordkeeping for training and evaluation methods, and sourcing preferences or restrictions.
Implications and a practical playbook for contractors
- Match governance to risk. Expect layered approvals, independent validation, and formal risk acceptance for high-impact AI. Low-risk uses should move faster, but high-stakes decisions will face tighter scrutiny.
- Secure the AI architecture. Agencies are signaling preference for enterprise pathways (e.g., secure testbeds, API gateways). Centralize access, logging, and controls to speed approvals.
- Build public-facing transparency. Maintain AI inventories, document waivers, and prepare summaries that can be shared with agencies and, where required, the public.
- Instrument continuous monitoring. Log inputs, outputs, model versions, and human overrides. Establish retention schedules and routine reevaluation tied to business and legal risk.
- Prep for sourcing scrutiny. Be ready to show origin, ownership, supply chains, and security posture, with a focus on U.S.-developed and produced tools where feasible.
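The monitoring bullet above can be instrumented with very little machinery: an append-only audit record capturing model version, hashed inputs and outputs, and any human override. The field names below are illustrative, not drawn from an OMB template; hashing keeps sensitive text out of the log itself while still letting you prove what was processed.

```python
import json
import hashlib
from datetime import datetime, timezone
from typing import Optional

def audit_record(model_version: str, prompt: str, output: str,
                 human_override: Optional[str] = None) -> dict:
    """Build one audit log entry. Inputs and outputs are hashed rather
    than stored verbatim so sensitive text never lands in the log."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "human_override": human_override,  # e.g. "rejected: adverse impact"
    }

def append_log(path: str, record: dict) -> None:
    """Append as JSON Lines, a format that suits simple retention
    schedules and later replay or reevaluation."""
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

A retention job can then age out records on the schedule your legal team sets, while the hashes preserve an evidentiary link to the original inputs.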
Contracts and compliance: what to standardize now
- AI acceptable-use policy that prohibits entry of sensitive client data or CUI into unapproved tools.
- Data governance that enforces standards, lineage, and access controls across training and evaluation data.
- Documentation packets for high-impact AI: testing plans, impact assessments, human oversight design, appeal mechanisms, and monitoring dashboards.
- Clear IP, data ownership, and interoperability positions for your proposals and negotiations.
- Vendor and model risk reviews covering security, privacy, bias, and provenance.
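The first bullet above (blocking sensitive client data and CUI from unapproved tools) can be enforced in code as well as policy. The sketch below assumes a hypothetical tool allowlist and a few placeholder patterns; a production screen would use your organization's actual data-classification markings and DLP tooling, not these examples.

```python
import re
from typing import Tuple

# Illustrative patterns only; replace with your own classification rules.
SENSITIVE_PATTERNS = [
    re.compile(r"\bCUI\b"),                     # CUI markings
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # SSN-shaped strings
    re.compile(r"(?i)contract\s+no\.?\s*\S+"),  # contract numbers
]

APPROVED_TOOLS = {"internal-llm-gateway"}  # hypothetical allowlist

def screen_prompt(tool: str, text: str) -> Tuple[bool, str]:
    """Return (allowed, reason). Block unapproved tools outright, and
    block approved tools when the prompt looks sensitive."""
    if tool not in APPROVED_TOOLS:
        return False, f"tool '{tool}' is not on the approved list"
    for pat in SENSITIVE_PATTERNS:
        if pat.search(text):
            return False, f"matched sensitive pattern: {pat.pattern}"
    return True, "ok"
```

Pairing a screen like this with the written acceptable-use policy gives you something auditable to show an agency reviewer, rather than policy text alone.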
Prepare your workforce
Agencies are investing in literacy and role-based competence. Contractors that do the same will move faster through reviews and reduce rework.
Bottom line
Agencies are building a consistent AI approach: secure platforms, data discipline, workforce readiness, guardrails for high-impact uses, and transparency. The December 29, 2025, policy deadline will lock in expectations and show up in your solicitations and awards.
Get ahead now: keep an AI inventory, strengthen data governance, implement human oversight, pressure-test models, and align your policies with your partner agency's strategy. Early movers will avoid costly retrofits and be ready when new clauses land in the FAR and agency supplements.