Trump administration reverses course on AI safety reviews after powerful Anthropic model raises national security concerns

The Trump administration is reversing its opposition to AI safety reviews after Anthropic's Mythos model showed capabilities in developing cybersecurity exploits. Officials now want formal government review of powerful models before release.

Categorized in: AI News, Government
Published on: May 06, 2026

The Trump administration is reviving a Biden-era policy it once mocked: government review of powerful AI models before their release. The shift comes after Anthropic's latest large language model, called Mythos, demonstrated capabilities in developing cybersecurity exploits that U.S. officials now view as a national security risk.

A year ago, Trump officials dismissed AI safety concerns as fearmongering. Vice President JD Vance warned in February 2025 that excessive regulation could "kill a transformative industry." David Sacks, the White House's AI and crypto czar, called safety advocates a "doomer industrial complex" engaged in "regulatory capture strategy."

That rhetoric has shifted. The administration is now discussing an executive order to create an AI working group that would establish formal government review procedures for new models. The approach mirrors Britain's system, which requires government bodies to verify that AI models meet safety standards.

The Policy Reversal

President Trump revoked Biden's AI safety testing order on his first day in office. Three days later, he issued an order titled "Removing Barriers to American Leadership in Artificial Intelligence" that eliminated safety testing requirements.

Now the White House opposes Anthropic's plan to expand Mythos access from roughly 50 companies to 120, citing national security concerns. The National Security Agency is already using the model to search for vulnerabilities in Microsoft products.

Google, Microsoft, and xAI have all agreed to give the government early access to their models. The Commerce Department's Center for AI Standards and Innovation will handle the reviews. That agency, previously called the US AI Safety Institute, was renamed last June as part of the administration's effort to downplay safety concerns.

A Contradictory Position

The administration has created an internal contradiction. One set of officials is working to phase out Anthropic models over six months because the company refused to amend its Pentagon contract to allow "all lawful use" of its technology. Meanwhile, other officials are trying to expand government access to Anthropic's models.

The administration designated Anthropic a "supply chain risk" and continues defending that designation in court while simultaneously seeking to help agencies circumvent the legal roadblock it created.

Broader Implications

The reversal undermines the administration's push to block state-level AI regulations. Trump officials had pressured Republican lawmakers in Nebraska and Tennessee to weaken or abandon bills introducing safety and transparency requirements for AI companies. That effort now faces steeper odds.

Critics worry the administration could use any licensing regime for censorship: denying release of models deemed "woke" or pressuring companies into favors. The risk of politicizing AI oversight is real.

Still, the administration's recognition that AI safety concerns warrant serious attention marks a departure from its dismissive stance. Federal agencies and officials now acknowledge that models are becoming more capable and more dangerous.

For government professionals working on AI policy, the shift signals that government AI decisions will increasingly involve formal safety evaluations. Those tasked with oversight may benefit from resources such as the AI Learning Path for Policy Makers, which covers governance frameworks and policy analysis relevant to these emerging requirements.
