GSA proposes sweeping AI contract clause covering data rights, security and vendor restrictions

The GSA proposed a contract clause on March 6 that would give the federal government ownership of all data fed into AI systems and bar contractors from using that data to train their models. Comments are due March 20, 2026.

Published on: Mar 19, 2026

The General Services Administration released a draft contract clause on March 6 that would impose strict controls on how contractors develop and use artificial intelligence for federal agencies. The proposed requirement, GSAR 552.239-7001, grants the government ownership of all data fed into AI systems and outputs generated by them, prohibits contractors from using that data to improve their models, and mandates the use of only American-made AI systems.

The GSA is accepting public comment through March 20, 2026. The short timeline means government agencies and contractors need to act quickly if they plan to weigh in.

What the Clause Requires

Data ownership. The government owns all inputs, outputs, and custom modifications made to AI systems. Contractors receive only a limited license to use this data to perform the contract, nothing more. They cannot use government data to train, fine-tune, or improve any AI model for other customers or commercial purposes.

American AI systems only. Contractors cannot use foreign AI systems or AI components developed or controlled by non-U.S. entities. The clause defines "American AI Systems" as systems "developed and produced in the United States" but provides no further guidance on what "produced" means, leaving room for interpretation.

Incident reporting in 72 hours. When a contractor discovers a confirmed or suspected security incident involving government data, it must notify the Cybersecurity and Infrastructure Security Agency, the contracting officer, and other designated contacts within 72 hours. Daily updates are required until resolution. All forensic evidence must be preserved for at least 90 days.

Data handling restrictions. Contractors must implement "eyes off" procedures that limit human review of government data to only what is necessary and must log all such access. Government data must be logically separated from other customer data. When the contract ends, all government data must be securely deleted and certified in writing.

Vendor accountability. Prime contractors are directly responsible for ensuring that all their subcontractors and commercial AI vendors comply with the clause. This extends liability downstream to commercial AI platforms and models that contractors integrate into their solutions.

Unbiased AI principles. The clause requires AI systems to be "truthful," prioritize "historical accuracy, scientific inquiry, and objectivity," and operate as a "neutral, nonpartisan tool." The government reserves the right to test systems for bias and to suspend their use if it finds non-compliance.

What This Means for Contractors

The data ownership provisions fundamentally challenge the business models of many commercial AI providers. Contractors cannot use government data to improve their products, which limits how they can extract value from government work.

Contractors will need to renegotiate agreements with commercial AI vendors to flow down these requirements. Many vendors may resist or be unable to accept terms that prohibit them from using data for model training or that assign ownership to the government.

The "American AI Systems" requirement may force contractors to switch technologies or rebuild systems with domestic alternatives. Combined with data localization and segregation requirements, this could require significant architectural changes.

The direct liability for vendor compliance creates new risk. If a subcontractor or commercial AI platform fails to meet the clause's terms, the prime contractor bears the cost of remediation and potential termination for cause.

Smaller companies and startups may face higher barriers to entry. The compliance costs and strict requirements could disadvantage innovators who lack the resources to meet these obligations.

Ambiguities That Need Clarification

The clause leaves several critical terms undefined. "AI capabilities" has no definition, making it unclear which contracts will require compliance. The term "American AI Systems" provides no test for what constitutes "produced," which may create challenges for systems built with global data, open-source components, or international talent.

The "Unbiased AI Principles" are subjective. What counts as "truthful" or "neutral" may be contested, and the government's testing benchmarks are not specified. This creates risk for contractors who cannot know in advance whether their systems will pass government evaluation.

The clause does not define "reasonable decommissioning costs" or clarify what constitutes a "performance issue" that justifies suspension of an AI system.

What Government Agencies Should Know

If this clause is adopted, it will give agencies strong ownership rights over data and custom AI modifications. Agencies will have the ability to test AI systems independently and suspend their use for non-compliance. The government can export all data in open formats and switch vendors without being locked into a single contractor's proprietary system.

However, these protections come at a cost. Contractors will likely price in the compliance burden and the prohibition on using government data for model improvement. Agencies may see fewer vendors willing to bid on AI contracts, particularly smaller companies that lack the resources to meet these requirements.

The 72-hour incident reporting requirement is aggressive and may be difficult for some contractors to meet, particularly if they rely on third-party vendors who have their own incident response procedures.

Next Steps for Stakeholders

Anyone affected by this clause should submit comments to the GSA by March 20. Key areas to address include:

  • Clarification of "American AI Systems," "produced," and "AI capabilities"
  • Practical challenges of flowing down liability to commercial vendors
  • Whether the 72-hour incident reporting window is achievable
  • The subjective nature of "Unbiased AI Principles" and how they will be enforced
  • The impact of the data training prohibition on contractor business models

Contractors should conduct a gap analysis of their current AI offerings against the clause's requirements and review all agreements with commercial AI vendors to assess feasibility of compliance.

The GSA has indicated the clause could be included in a GSA Schedule refresh as early as spring 2026. The window for industry feedback is narrow, and detailed comments will be essential to shape a final rule that balances government needs with commercial realities.


