Pentagon Orders Removal of Anthropic's Claude From Government Systems
The U.S. Department of Defense has ordered the removal of Claude, an artificial intelligence system developed by Anthropic, from certain government operations. An internal Pentagon memo requires the removal to be completed within 180 days.
The decision follows the Defense Department's classification of Claude as a "supply chain risk," a designation that bars Anthropic from defense contracts. Defense Secretary Pete Hegseth said the classification came after Anthropic refused to allow unrestricted military use of the system, including for surveillance and autonomous weapons applications.
The Core Disagreement
Anthropic has maintained that it will not remove safety guardrails from its AI systems. In a statement responding to Hegseth's comments, the company said that "no amount of intimidation or punishment will change our position on mass domestic surveillance or fully autonomous weapons."
The refusal reflects a fundamental conflict: the Pentagon wants broad control over how the tools it deploys function, while Anthropic has built restrictions into Claude's design that the company considers non-negotiable.
Questions About Government Authority
The move raises questions about how far the federal government can restrict private technology companies whose products are embedded in government infrastructure. Bruce Schneier, a cybersecurity expert and lecturer at Harvard Kennedy School, called the decision "a statement about the U.S. government trying to bully a supplier."
Civil liberties advocates see a broader threat. Corynne McSherry, legal director of the Electronic Frontier Foundation, said that "requiring a company to rewrite its code to remove guardrails means compelling different expressions, a clear constitutional violation."
The Technology Question
Others argue the Pentagon's caution reflects legitimate concerns about AI reliability. Current systems can produce inaccurate information and behave unpredictably in ways developers don't anticipate. "This technology is not mature," said one technology industry professional. "If they're telling you this is not something our technology can do, you probably shouldn't use it, at least not right now."
What Happens Next
Anthropic has filed a lawsuit challenging the Pentagon's decision. The company argues the designation is unjustified and could set a precedent for government overreach into private AI development.
The outcome will likely shape how future conflicts between government agencies and AI developers are handled. For government professionals, the case demonstrates the emerging tension between national security priorities and technology company autonomy.