Trump administration appeals ruling that blocked Pentagon from punishing Anthropic over AI dispute

The DOJ appealed a federal judge's order blocking the Pentagon from labeling Anthropic a supply chain risk and banning federal use of its Claude chatbot. Judge Rita Lin ruled the Pentagon's actions appeared arbitrary and could "cripple" the company.

Categorized in: AI News, Government
Published on: Apr 04, 2026

Trump Administration Appeals Court Order Blocking Pentagon Action Against Anthropic

The Department of Justice filed an appeal Thursday in San Francisco federal court after a judge blocked the Pentagon from taking punitive measures against AI company Anthropic. U.S. District Judge Rita Lin issued the order last week, halting efforts to label Anthropic a supply chain risk and blocking enforcement of a Trump administration directive ordering federal agencies to stop using the company's Claude chatbot.

The Ninth Circuit Court of Appeals set an April 30 deadline for the Justice Department to file documents explaining why Lin's decision should be overturned.

What the Judge Ruled

Lin said the Pentagon's actions appeared arbitrary and capricious. She wrote that the "broad punitive measures" could "cripple Anthropic," singling out Defense Secretary Pete Hegseth's use of a rare military authority previously directed at foreign adversaries.

"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," Lin wrote.

Lin stayed her order for one week to allow the Pentagon time to appeal. She clarified that her decision doesn't require the Pentagon to use Anthropic's products or prevent it from switching to other AI providers.

Pentagon's Response

U.S. Defense Undersecretary Emil Michael, the Pentagon's chief technology officer, called Lin's order a "disgrace" on social media. He said it would disrupt Hegseth's "full ability to conduct military operations with the partners it chooses."

The Dispute's Origins

Trump and Hegseth announced the action against Anthropic on February 27 after contract negotiations failed. Anthropic sought to prevent its AI technology from being deployed in fully autonomous weapons or surveillance of Americans. The Pentagon argued it should be able to use Claude in any lawful manner.

Anthropic also has a separate case pending in the federal appeals court in Washington, D.C., involving a different Pentagon rule attempting to declare the company a supply chain risk.

Who Supported Anthropic

Multiple parties filed legal briefs supporting Anthropic's position:

  • Microsoft
  • Industry trade groups
  • Rank-and-file tech workers
  • Retired U.S. military leaders
  • A group of Catholic theologians

For government professionals, this case illustrates the tension between military procurement priorities and corporate policy decisions over how AI systems may be deployed.

