Trump Administration Weighs Stricter AI Vetting and Supply Chain Controls
The White House is deliberating a 16-page executive order that would require companies to submit advanced AI models for government review before release and prohibit the private sector from "interfering" with federal AI use. A White House spokesperson said any official announcement would come directly from President Trump, characterizing current discussions as "speculation."
The administration signed agreements Tuesday with Microsoft, Google DeepMind, and xAI to voluntarily submit models for national security evaluation ahead of public release. These deals mark a shift from the hands-off approach the White House previously favored at the urging of venture capitalists David Sacks and Marc Andreessen.
The Anthropic Standoff
The policy recalibration stems partly from a standoff between the Defense Department and AI company Anthropic. The company refused to allow the military to use its Claude model for surveillance or autonomous weapons systems.
In March, Defense Secretary Pete Hegseth designated Anthropic a supply chain risk to national security, an unprecedented restriction that barred federal agencies from using Anthropic products. The proposed executive order would create more aggressive contracting and termination standards for federal vendors, making such refusals harder to sustain.
Mythos Changes the Calculus
Anthropic's unreleased Mythos model has altered White House thinking on AI security. Early government testing shows the model can identify and exploit software vulnerabilities faster than human hackers, raising alarm among Trump administration officials.
The cybersecurity threat prompted the administration to work in recent weeks toward lowering tensions with Anthropic. Two people familiar with the discussions said the White House is considering creating a board to review the supply chain risk designation against the company.
The proposed order would also establish technical guidelines for securing open-weight models, those whose training parameters are public and can be adapted by users. The administration is weighing whether to involve the intelligence community in securing systems against advanced AI models.
Industry Pushback Mounts
Tech representatives are expressing concern that stricter government controls will slow innovation. Daniel Castro, president of the Information Technology and Innovation Foundation, warned that requiring government approval for each new model version would undermine competitive advantage against China.
"We've seen the speed of Silicon Valley, we've seen the speed of Washington, and they operate at very different paces," Castro said.
Saif Khan, a former emerging technology adviser in the Biden administration, said Mythos has shifted the conversation. "Before that, I think there was dismissiveness. Now many folks are taking this quite seriously," Khan said. "The pure, Silicon Valley venture-capital type of approach to AI policy just might be over in the Trump administration."
Broader Policy Concerns
The vetting discussions occur as the administration addresses wider public skepticism about AI. A recent poll found broad concern over the technology, including industry spending on political races.
The contemplated executive orders represent one of several actions the White House is considering to address AI security risks and limit industry influence over government policy demands. The details remain in flux, according to people familiar with the deliberations.