Judge Questions Trump Administration's Ban on Anthropic AI in Federal Government
A federal judge expressed skepticism Tuesday about the Defense Department's decision to bar federal agencies from using Anthropic's AI technology, calling the move "troubling" and suggesting it amounts to an attempt to punish the company for disagreeing with the administration.
US District Judge Rita F. Lin said during a hearing in San Francisco that designating Anthropic a supply-chain risk, a label typically reserved for US adversaries, doesn't align with the stated national security justification. The ban "looks like an attempt to cripple Anthropic," Lin said.
Anthropic, maker of the Claude chatbot, sued last month to block the Pentagon's declaration. The company demanded assurances that its AI wouldn't be used for mass surveillance or autonomous weapons, while the government refused to accept any restrictions on how it could deploy the technology.
Lin said she was concerned the government may be retaliating against Anthropic for speaking publicly about the dispute. She didn't immediately rule on Anthropic's request for a preliminary injunction to block the ban, but said she would decide within days.
What's at stake
Anthropic claims the ban could cost it billions in lost revenue. The company argues the legal principles involved affect any federal contractor whose views the government dislikes.
The Pentagon's lawyer argued during the hearing that trust is essential in government contracts and that Anthropic destroyed that trust by trying to dictate Pentagon policies on AI use. The government expressed concern about "future sabotage," including potential changes to AI software after deployment.
Anthropic's lawyer countered that the Pentagon can review any AI model before using it and that Anthropic has no technical ability to disable, modify, or monitor how the military uses its systems.
What managers should know
The case shows how government AI policy can directly reshape vendor relationships and business operations. Organizations that contract with federal agencies should note how quickly a regulatory dispute can escalate and which contractual safeguards, such as usage restrictions and review rights, become decisive.
For executives managing AI vendor relationships, the dispute highlights the tension between the government's demand for unrestricted control and vendors' concerns about misuse of their technology. A working grasp of AI governance frameworks and their policy implications is now essential for strategic decision-making.
The case is Anthropic v. US Department of War, filed in US District Court for the Northern District of California.