Court Blocks Trump Administration's Move to Designate Anthropic a 'Supply Chain Risk'
A federal judge in San Francisco issued a preliminary injunction Thursday blocking the Trump administration from designating the AI company Anthropic as a supply chain risk. Judge Rita F. Lin of the Northern District of California found the government likely violated the company's free speech rights and misapplied a statute Congress intended for foreign adversaries.
The dispute centers on two restrictions Anthropic imposed on its Claude AI system when the Defense Department and intelligence agencies began using it. The company prohibits fully autonomous weapons control without human operators and bars the system's use for mass domestic surveillance.
Defense Secretary Pete Hegseth and President Trump objected to these limits. The administration argued that government officials, not tech executives, should decide how weapons systems operate. When Anthropic refused to remove the restrictions, the Pentagon designated the company a supply chain risk, a label Congress intended to flag foreign intelligence services and hostile actors that might sabotage U.S. technology.
The Court's Reasoning
Lin found that the supply chain risk statute "has never been applied to a domestic company" before this case. The law targets foreign threats, not American businesses that disagree with government policy.
The judge also identified procedural violations. Federal law requires agencies to consider less restrictive alternatives before imposing a supply chain risk designation. The administration skipped this step.
Most significantly, Lin concluded the government retaliated against Anthropic for public speech. The administration publicly singled out CEO Dario Amodei during negotiations, and again after the company resisted pressure. Lin called this "classic illegal First Amendment retaliation," writing: "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."
What Happens Next
Anthropic filed two separate lawsuits. Lin's ruling addresses the First Amendment claim. A second case, filed in the D.C. Circuit appeals court, challenges whether the supply chain risk designation itself was lawful under a different statute.
The administration has seven days to appeal Lin's injunction. Pentagon officials have already signaled they view the ruling as wrong and plan to continue treating Anthropic as a supply chain risk while the appeals process continues.
For government professionals, this case highlights the tension between agency authority and statutory limits, even in national security disputes. AI for Government training covers how these legal frameworks shape AI deployment in federal operations.