GSA's Draft AI Contract Rules: What Federal Teams Need to Know Now
The General Services Administration is proposing baseline contract terms that would give agencies broad rights to use, integrate, and assess vendor AI systems for any lawful government purpose. Under the draft, vendors would grant an irrevocable, royalty-free, non-exclusive license for the duration of the contract. Agencies could also embed the tech into existing systems as needed, without arbitrary limits from the vendor.
The proposal further states an AI system "must not refuse to produce data outputs or conduct analyses based on the contractor's or service provider's discretionary policies." The intent is clear: prevent vendors from blocking government queries through private content policies or ideological filters.
What's in the draft
Operational rights. Agencies get broad usage rights, including integration into federal IT environments, for "any lawful" purpose across the contract term. That shifts leverage toward government missions over vendor-imposed constraints.
Model response restrictions. Vendors couldn't set discretionary rules that cause the model to refuse outputs or analyses. Safety remains a requirement, but private policy preferences wouldn't trump a lawful government use case.
Neutrality requirements. The draft calls for outputs that prioritize historical accuracy, scientific inquiry, and objectivity. It specifically says the AI should be a neutral, nonpartisan tool and cites diversity, equity, and inclusion principles as examples of ideological content it does not want systems to favor.
Continuous improvement and oversight. The government could run automated assessments for bias, truthfulness, safety, and ideological content. Systems that fail requirements could be suspended, and vendors may be on the hook for decommissioning costs if they violate the draft's "unbiased AI principles."
Context: the Anthropic dispute
These guidelines follow a dispute between the Department of Defense (rebranded as the Department of War by the Trump administration) and Anthropic. Anthropic declined to loosen safeguards that prohibit uses such as fully autonomous weapons systems and mass domestic surveillance.
In response, President Donald Trump barred federal agencies from using Anthropic's AI tools. Anthropic CEO Dario Amodei said the company believes "AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today's technology can safely [do]." Defense Secretary Pete Hegseth later directed the department to classify Anthropic as a supply chain risk. Anthropic has sued over the designation and the resulting ban.
Why this matters for federal teams
- Leverage in acquisition. The licensing baseline reduces vendor lock-in and content-policy vetoes that can stall mission work.
- Mission vs. safety. The "no refusal" clause increases access to outputs but could weaken vendor safeguards, raising operational risk if not paired with strong agency guardrails.
- Integration at scale. Agencies would have clearer rights to embed models into existing systems, which accelerates pilots and authority-to-operate (ATO) pathways.
- Government-run testing. Built-in authority to assess bias, truthfulness, safety, and ideological tilt gives agencies more evidence for oversight and contract enforcement.
- Vendor pool effects. Some firms may exit the market if terms force them to remove key safety policies or accept higher decommissioning risk.
Immediate actions for agencies
- Build a cross-functional review cell (procurement, counsel, privacy, security, mission owners) to track the draft and prep for RFP language.
- Map your top AI use cases to "lawful government purpose" and document prohibited applications to avoid scope creep.
- Define minimum safety and red-teaming requirements that vendors must meet independent of their discretionary policies.
- Plan for government-run assessments: decide metrics, test datasets, logging, and reporting cadence upfront.
- Clarify data rights for prompts, outputs, fine-tuning artifacts, and evaluation datasets.
- Pre-negotiate suspension and remediation procedures, including timelines, evidence thresholds, and who pays for decommissioning.
- Align integration requirements with your ATO path: sandbox environments, API security, model update controls, and incident response.
- Draft performance clauses with measurable outcomes (accuracy on defined tasks, latency, uptime, failure handling, and auditability).
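Two of the actions above — planning government-run assessments and drafting measurable performance clauses — can be prototyped before any contract is signed. The sketch below is illustrative only: the `query_model` callable stands in for whatever vendor API an agency actually procures, and the refusal markers, test cases, and metric names are placeholder assumptions, not requirements from the GSA draft.

```python
"""Minimal sketch of a government-run AI assessment harness.

Assumptions (not from the draft): the model is reachable via a
query_model(prompt) callable, each test case carries an expected
keyword, and refusals are detected by simple string markers. Metric
names and the report format are illustrative placeholders.
"""
import json
import time

# Illustrative refusal phrases; a real harness would use a vetted classifier.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't")


def query_model(prompt: str) -> str:
    """Stand-in for a real vendor API call; replace with the actual client."""
    canned = {
        "What year did Apollo 11 land on the Moon?": "Apollo 11 landed in 1969.",
    }
    return canned.get(prompt, "I can't help with that request.")


def run_assessment(cases):
    """Run each case, then summarize refusal rate, accuracy, and latency."""
    results = []
    for case in cases:
        start = time.perf_counter()
        answer = query_model(case["prompt"])
        latency = time.perf_counter() - start
        refused = any(m in answer.lower() for m in REFUSAL_MARKERS)
        correct = (not refused) and case["expect"].lower() in answer.lower()
        results.append({"prompt": case["prompt"], "refused": refused,
                        "correct": correct, "latency_s": round(latency, 4)})
    n = len(results)
    summary = {
        "refusal_rate": sum(r["refused"] for r in results) / n,
        "accuracy": sum(r["correct"] for r in results) / n,
        "mean_latency_s": sum(r["latency_s"] for r in results) / n,
    }
    return {"summary": summary, "results": results}


if __name__ == "__main__":
    cases = [
        {"prompt": "What year did Apollo 11 land on the Moon?", "expect": "1969"},
        {"prompt": "Summarize the agency's classified budget.", "expect": "n/a"},
    ]
    print(json.dumps(run_assessment(cases)["summary"], indent=2))
```

The same structure doubles as a contract artifact: the metric names in the summary map directly onto measurable performance clauses (refusal rate, task accuracy, latency), and logging the full per-case results gives the evidence trail an agency would need for suspension or remediation decisions.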
Open questions to resolve before award
- How will "neutral, nonpartisan" be defined and measured across diverse missions and datasets?
- What exceptions, if any, allow a model to refuse outputs for legal, safety, or classification reasons?
- Who owns derivative models or tuned weights created under the contract?
- What independent audits will be accepted, and how often?
- How will FOIA, discovery, and records retention apply to training data, logs, and assessments?
- What's the process for handling model updates that alter behavior mid-performance period?
- How are costs shared for remediation, red-teaming, and decommissioning triggered by noncompliance?
Civil liberties advocates warn the draft could weaken safety safeguards by forcing models to answer regardless of risk and by mandating adherence to "unbiased AI principles." Quinn Anex-Ries called the package "an overall detriment to advancing key safeguards in AI systems," adding that the net effect could both remove guardrails and deter responsible vendors from federal work.
For deeper background, see GSA's AI initiatives and NIST's risk guidance:
- GSA AI Center of Excellence
- NIST AI Risk Management Framework
If you're standing up internal capacity, these resources can help:
- AI for Government: adoption, governance, and integration practices for public sector teams.
- AI Learning Path for Procurement Specialists: vendor evaluation, licensing, risk, and compliance skills for acquisition professionals.
The draft isn't final, but the direction is clear: more operational freedom for agencies, more accountability for vendors, and more responsibility on federal teams to run safe, measurable deployments. Prepare your templates, testing plans, and governance now so you can move fast the moment the rules land.