Trump White House reverses course on AI safety as powerful new model prompts calls for government vetting

The Trump administration is weighing a mandatory pre-release approval process for AI models, similar to FDA drug review. The shift follows Anthropic's Mythos model, which can identify decades-old security flaws.

Categorized in: AI News, Government
Published on: May 10, 2026

White House Reconsiders AI Vetting After Powerful New Model Raises Safety Questions

The Trump administration is reconsidering its hands-off approach to AI regulation after Anthropic released Mythos, a model capable of identifying decades-old security vulnerabilities. The shift marks a departure from the White House's stated commitment to light-touch oversight and U.S. competitive advantage.

National Economic Council Director Kevin Hassett suggested Wednesday that the administration may issue an executive order requiring government approval before companies release new AI models. He compared the potential process to FDA drug approval.

"We're studying possibly an executive order to give a clear roadmap to everybody about how this is going to go and how future AIs that also potentially create vulnerabilities should go through a process so that they're released in the wild after they've been proven safe, just like an FDA drug," Hassett said.

Tech Industry Pushback and Mixed Signals

Hassett's comments sparked immediate concern from AI companies and policy analysts, who saw the proposal as a reversal of Trump's deregulatory stance. The White House has spent months working to preempt state AI laws it views as overly restrictive.

Hours after Hassett's remarks, White House Chief of Staff Susie Wiles wrote on X that the administration is "not in the business of picking winners and losers." She said the goal is to "ensure the best and safest tech is deployed rapidly to defeat any and all threats."

A White House official later said that "discussion about potential executive orders is speculation" and that policy announcements will come directly from Trump. The official added that "there is no shifting messaging" and that the White House continues to "balance advancing innovation and ensuring security."

Policy experts at the Cato Institute warned that pre-approval systems could give federal officials a "kill switch" to suppress speech and stifle innovation. They noted that the Biden administration faced similar criticism for proposing comparable oversight in its AI executive order.

Voluntary Testing Already Underway

AI companies have already agreed to share models with the government voluntarily. The National Institute of Standards and Technology's Center for AI Standards and Innovation has evaluated models from OpenAI and Anthropic since 2024.

This week, Google DeepMind, Microsoft, and xAI agreed to submit their models for government testing before release. The voluntary approach sidesteps the regulatory concerns raised by mandatory vetting.

The Mythos Question

Anthropic did not release Mythos publicly, but the model's ability to identify security flaws quickly presents a dual-use problem. It could help institutions patch vulnerabilities faster, or it could empower hackers to exploit those same flaws.

Treasury Secretary Scott Bessent told Fox Business that the model represents "a step change in the power of one large language model" and that the government expects similar advances from other companies.

Bessent acknowledged the tension between innovation and safety but framed it as manageable. "Imagine if China or some non-state actor were ahead of us," he said. "What we're determined to do is work with our AI companies to allow them to continue to innovate. But our charge in the U.S. government is maintaining safety."

Leadership Transition and Shifting Priorities

The administration's approach to AI has shifted since David Sacks, Trump's AI and crypto czar, left the White House earlier this year. Sacks favored minimal AI regulation.

Treasury Secretary Bessent and other officials now oversee the issue. Vice President Vance told major AI leaders in April that "we all need to work together" on AI policy, according to the Wall Street Journal.

Anthropic CEO Dario Amodei met with White House officials in mid-April, less than two months after Trump directed federal agencies to stop using the company's technology. The Pentagon had labeled Anthropic a supply chain risk, a designation typically reserved for foreign adversaries. The Pentagon has shown little interest in reconciliation despite the White House's apparent shift.

Broader Context

The White House's changing stance reflects growing public concern about AI. A Quinnipiac poll released in late March found that 80 percent of U.S. adults expressed concern about the technology.


Trump has shifted positions on major tech policy before. During his first term, he pushed to ban TikTok; in his second term, he negotiated to preserve it. He dismissed cryptocurrency as a "scam" before embracing it during the 2024 campaign. The pattern suggests the administration's final position on AI vetting remains unsettled.

