AI Labs Must Pass Safety Review to Bid on U.S. Contracts, Advocacy Group Says
The Trump administration should require artificial intelligence developers to pass security reviews before releasing new models and bar those that fail from winning government contracts, according to Americans for Responsible Innovation.
The group sent a letter to administration officials Monday outlining the proposal. The push comes as the White House assesses risks from Anthropic's Mythos model, which could simplify and accelerate complex cyberattacks.
What the Requirements Would Cover
Companies would need to demonstrate their models cannot easily enable cyberattacks or weapons development to qualify for federal work. The Center for AI Standards and Innovation, which already reviews some models through voluntary agreements with OpenAI, Anthropic, Google, Microsoft, and xAI, should lead the effort, the group said.
Congress should create a permanent enforcement office within the Department of Commerce to oversee compliance, the letter states.
Who Would Be Affected
The requirements would apply to companies spending $100 million or more annually on compute to train frontier models, or generating at least $500 million yearly in AI product revenue. California adopted similar thresholds for safety reporting requirements last year.
The Broader Context
The proposal reflects growing tension between the AI industry's pace of deployment and the government's security concerns. Tying contract eligibility to safety reviews creates a financial incentive for compliance without banning any models outright.
For government officials overseeing AI procurement, the proposal signals a shift toward formal vetting processes that could reshape vendor selection criteria across federal agencies.