Federal agencies test Anthropic's hacking AI despite Pentagon ban on the company

Federal agencies are quietly testing Anthropic's Mythos hacking tool even as the Trump administration has banned the company from most government contracts. The model can find and exploit unknown software vulnerabilities before patches exist.

Categorized in: AI News, Government
Published on: Apr 15, 2026
Government Quietly Tests Anthropic's Hacking Tool Despite Trump Ban

Federal agencies are actively testing Anthropic's Mythos AI model for cybersecurity work even as the Trump administration has barred the company from most government contracts. The Commerce Department's Center for AI Standards and Innovation is running security tests on Mythos, and staff from at least three congressional committees have requested briefings on the model's capabilities, according to people with knowledge of the effort.

The disconnect reveals a fundamental tension within the administration: political pressure to punish Anthropic collides with urgent national security needs.

The Ban and Its Workaround

In late February, President Trump and Defense Secretary Pete Hegseth directed federal agencies to stop using Anthropic's technology. Hegseth then designated the company a supply chain risk - an unprecedented move against a U.S. AI firm that effectively blocks it from Department of Defense contracts.

The designation followed Anthropic CEO Dario Amodei's refusal to allow the Pentagon to deploy the company's models for autonomous lethal attacks or mass surveillance of Americans. Trump called Anthropic's leadership "Leftwing nut jobs" on social media.

But the ban has not stopped government testing. Anthropic confirmed it made Mythos available for "the government's own testing and evaluation." Researchers at the Commerce Department's center are currently assessing the model's capabilities and risks through red-teaming, adversarial security tests that probe how the model could be misused.

What Mythos Does

Mythos can identify and exploit unknown software vulnerabilities - flaws that hackers could weaponize before patches exist. Anthropic restricted the model's release to a select group of tech and cyber organizations because of this risk.

The Treasury Department is also seeking to use Mythos to find unknown flaws in its networks, according to reporting from Bloomberg. Congressional aides expressed frustration that the government isn't deploying the technology more aggressively to defend against attacks from Russia, China, and other adversaries.

One congressional aide said the Pentagon had "shot itself in the foot by giving the middle finger to the most capable AI provider."

The Litigation Factor

Anthropic sued the government last month over the supply chain risk designation, filing cases in two separate federal courts. A judge in Northern California paused part of the government's action, while the D.C. Circuit Court of Appeals temporarily upheld it.

Legal experts say the split ruling may have inadvertently enabled federal agencies to access Mythos. A lawyer at the Institute for Law and AI said federal agencies "would not have been allowed" to test the model if the California judge had ruled against Anthropic.

The Chilling Effect

The administration's public attacks have discouraged open collaboration, according to a former senior national security official. Government agencies considering large-scale cybersecurity work with Anthropic - which would require teams of software engineers and substantial investment - face pressure not to openly engage with the company.

The White House said it "continues to work and engage with AI companies to ensure their models help secure critical software vulnerabilities" and is "proactively engaging across government and industry."

The Timing Problem

Anthropic projects that other companies will develop equivalent hacking capabilities within two years. That timeline concerns former national security officials who see Mythos as a rare advantage.

Glen Gerstell, a former general counsel at the National Security Agency, said he hopes "the current tensions between the Pentagon and Anthropic don't get in the way of something critically important to cyber security."

The CIA has signaled it will not defer to Anthropic's ethical positions. Deputy CIA Director Michael Ellis said the agency will "not let private companies dictate how and when the CIA will make lawful use of their technologies."

For federal workers managing cybersecurity, the situation presents a practical reality: the most capable tool for finding vulnerabilities remains available through government testing channels, even as official policy treats the company as a security risk.

