Trump Administration Appeals Court Order Blocking Anthropic Penalties
The Department of Justice filed notice Thursday to appeal a federal judge's decision that blocked the Pentagon from taking punitive measures against AI company Anthropic. U.S. District Judge Rita Lin issued the order last week in San Francisco federal court, halting enforcement of actions the Trump administration initiated against the company.
Lin blocked the Pentagon from labeling Anthropic a supply chain risk and blocked enforcement of a Trump directive ordering all federal agencies to stop using Anthropic's Claude chatbot. The judge said the government's actions appeared arbitrary and capricious, with potential to "cripple Anthropic."
The Core Dispute
The conflict stems from failed negotiations over a defense contract. Anthropic sought to prevent its AI technology from being deployed in fully autonomous weapons or used for surveillance of Americans. The Pentagon argued it should be able to use Claude in any way it deems lawful.
Trump and Defense Secretary Pete Hegseth publicly announced the actions against Anthropic on February 27 after those negotiations broke down.
Judge's Rationale
Lin wrote that "nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."
She specifically criticized Hegseth's use of a rare military authority previously directed at foreign adversaries. The judge stayed her order for one week to allow the Pentagon time to appeal to the Ninth Circuit Court of Appeals.
Pentagon's Response
U.S. Defense Undersecretary Emil Michael, the Pentagon's chief technology officer, called Lin's order a "disgrace" on social media. He said the ruling would disrupt Hegseth's "full ability to conduct military operations with the partners it chooses."
Broader Support for Anthropic
Multiple third parties filed legal briefs supporting Anthropic's case, including Microsoft, industry trade groups, tech workers, retired military leaders, and a group of Catholic theologians.
Anthropic has also filed a separate case, pending in federal appeals court in Washington, D.C., involving a different rule the Pentagon is invoking in its attempt to declare the company a supply chain risk.