Contracts, Not Slogans, Decide How the Pentagon Uses AI

In government AI deals, who holds the pen depends on the buying path, the contract type, and the terms you strike. "Any lawful use" isn't a blank check; architecture and terms still gate use.

Published on: Mar 03, 2026

What rights do AI companies have in government contracts?

The Anthropic-Pentagon dispute lit up the news cycle and flooded the zone with half-truths. The key fact most commentary missed: a vendor's rights (and your leverage) depend on the acquisition pathway, the contract type, and the terms you negotiate.

After Defense Secretary Pete Hegseth pushed Anthropic to accept "for all lawful purposes," the company refused. President Trump then directed agencies to stop using Anthropic's products, and the department labeled the firm a "supply chain risk." Hours later, OpenAI announced a Pentagon deal it says respects two limits (no mass domestic surveillance, no fully autonomous weapons) while still agreeing to "any lawful use."

This isn't a novel fight over whether contractors can restrict government use. They can, and they do, under specific pathways, contracts and clauses. Your procurement choices decide who holds the pen.

How agencies actually buy AI, and why the pathway decides the rights

  • Commercial acquisition (FAR Part 12): AI is treated like commercial software. The government buys on market terms, which means the vendor's standard license and acceptable use policy (AUP) are the default. Expanded rights require negotiation and consideration.
  • License upgrades and enterprise agreements: AI shows up as an add-on (e.g., Copilot, Gemini). The base enterprise agreement governs. Changing AI terms usually means reopening the entire deal, so commercial defaults tend to stick.
  • GSA Multiple Award Schedule: Ordering agencies inherit GSA's master terms. If the master bakes in a vendor AUP, you likely can't override it downstream.
  • Negotiated procurements (FAR Part 15): Maximum flexibility to craft usage rights, data rights, transparency and governance. The tradeoff is process, protest risk and time. Many AI buys avoid it for speed.
  • Other Transactions (OTs): Non-FAR, highly flexible. Terms are whatever the parties sign. Agencies can secure broad rights; vendors can embed restrictions. In 2025, DOD used OTs to strike large-dollar agreements with multiple model providers.

So what does "any lawful use" really mean?

It's not a blank check for the government, and it's not a free veto for the vendor. Framed the way OpenAI describes it, the clause largely restates existing authorities and protocols. That doesn't expand the law, but it can change remedies if use violates the contract, giving the vendor breach and termination leverage it wouldn't have under constitutional or statutory claims alone.

Anthropic reportedly wanted explicit carve-outs for mass domestic surveillance and fully autonomous weapons, blocking those uses even if an agency believed they were lawful. That places a private party between the government and an otherwise lawful mission use, which explains the impasse.

Some agreements also tie usage to the current versions of policies (e.g., autonomy directives). If the contract locks those "as of" a date, it can freeze stricter standards even if law or policy later loosens. Whether that holds turns on precise incorporation language.

The real control may be architectural, not legal

  • Cloud-only deployment: No model weights on edge devices. The vendor runs the safety stack and classifiers and can update them.
  • Cleared vendor personnel "in the loop": Ongoing, hands-on involvement by the provider's staff steers how the system is used over time.
  • Termination leverage: If use breaches the agreement, the provider can pursue termination-subject to notice, cure and disputes clauses.

Here's the tension: the paper says "use for all lawful purposes," but the safety stack can still block a lawful use. Which controls: the permissive clause or the deployed architecture? The answer lives in your integration, change-control and override provisions.
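To make that concrete, here is a minimal sketch, in Python, of how a vendor-run gate sits between a request and the model. Everything in it is hypothetical (the classifier, the category names, the override flag); it illustrates the shape of the architecture, not any vendor's actual stack.

```python
from dataclasses import dataclass

# Hypothetical request gate. The contract may permit "any lawful use,"
# but this path runs on vendor infrastructure, so the vendor's classifier,
# not the contract clause, decides what actually executes.

BLOCKED_CATEGORIES = {"mass_domestic_surveillance", "autonomous_weapons"}

@dataclass
class Request:
    prompt: str
    contract_permits_all_lawful_use: bool  # what the paper says
    government_override_approved: bool     # negotiated override path, if any

def classify(prompt: str) -> str:
    """Stand-in for a vendor safety classifier (purely illustrative)."""
    if "track all residents" in prompt.lower():
        return "mass_domestic_surveillance"
    return "permitted"

def gate(req: Request) -> bool:
    """Return True if the request may proceed to the model."""
    if classify(req.prompt) in BLOCKED_CATEGORIES:
        # The permissive clause never reaches this branch; only an
        # explicit, contracted override provision does.
        return req.government_override_approved
    return True

req = Request(prompt="track all residents of region X",
              contract_permits_all_lawful_use=True,
              government_override_approved=False)
print(gate(req))  # False: the architecture blocks the use despite the clause
```

Notice that contract_permits_all_lawful_use never influences the outcome; that is the whole point. Only a negotiated override provision, reflected here as government_override_approved, changes what the gate does.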

Why pathway choice will decide your outcomes

Commercial-first, speed-first approaches constrain both sides. Vendors struggle to add safeguards beyond market norms. Agencies struggle to get transparency, audit rights, data protections and portability. You trade control for speed.

There's another risk: punishing hard bargaining with "supply chain risk" labels chills negotiation. You may get faster awards and weaker governance, because vendors stop pushing back when you need them to.

Action checklist for contracting, program and counsel teams

  • Pick the right pathway: Document why FAR Part 12, Part 15, GSA Schedule or an OT best supports mission needs and the protections you require.
  • License scope and AUPs: Are acceptable use policies incorporated by reference? Can the vendor change them unilaterally? If you need broader usage rights, state the consideration.
  • Define "any lawful use": Do you need explicit carve-outs or acknowledgments? Are policy references fixed "as of" a date or dynamic as amended?
  • Safety stack control: Who configures, updates or disables classifiers? Is there a government override for lawful mission use? Who approves it? (See the override sketch after this list.)
  • Deployment model: Cloud-only, GovCloud, on-prem, air-gapped or edge. The choice sets practical control, data egress and continuity risks.
  • Data rights: Government purpose rights, reuse limits, training/improvement rights, segregation of logs and data deletion on exit.
  • Transparency and audit: Access to logs, model/version provenance, red-team results, incident reporting SLAs and independent verification.
  • Security and compliance: ATO boundary, FedRAMP/FedRAMP+ expectations, SBOM, supply chain disclosures and continuous monitoring.
  • Portability and exit: Anti-lock-in measures, export/transition assistance, escrow for critical artifacts and secure decommissioning.
  • Remedies: Cure periods, suspension/termination rights for both parties, and forum/ADR choices that don't stall missions.
  • Operational guardrails: Mission-specific prohibited uses, human-in-the-loop triggers and clear limits for surveillance, domestic law enforcement and autonomy.
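Several of these items (safety stack control, transparency and audit, operational guardrails) converge on one question: who approves an override, and is the decision auditable? Below is a minimal sketch assuming a hypothetical two-person rule and an append-only log; the role names, file format and function are illustrative, not any agency's actual process.

```python
import json
import time

# Hypothetical override workflow for a blocked-but-lawful mission use:
# proceed only after every designated role signs off, and write each
# decision to an append-only audit log (supports the audit items above).

REQUIRED_ROLES = {"contracting_officer", "mission_owner"}  # illustrative

def approve_override(request_id: str, approvals: dict,
                     log_path: str = "override_audit.jsonl") -> bool:
    """Grant an override only if every required role has signed off."""
    granted = REQUIRED_ROLES.issubset(approvals)
    with open(log_path, "a") as f:  # append-only by convention
        f.write(json.dumps({
            "ts": time.time(),
            "request_id": request_id,
            "approvals": approvals,
            "granted": granted,
        }) + "\n")
    return granted

# One signature is not enough under a two-person rule.
print(approve_override("req-001", {"contracting_officer": "A. Smith"}))
# False
print(approve_override("req-001", {"contracting_officer": "A. Smith",
                                   "mission_owner": "B. Jones"}))
# True
```

Whatever the mechanism, the contract should name the approving roles and guarantee the government access to the resulting log, or the override exists only on paper.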

Bottom line

AI vendors can restrict government use when the pathway and the contract support it. Agencies can likewise secure the protections they need, provided they stop defaulting to approaches that surrender leverage up front.

Decide the pathway first. Then write the terms that match your risk tolerance and mission before the model touches production data.
