Appeals court refuses to block Pentagon from blacklisting Anthropic

A federal appeals court refused to block the Pentagon's blacklisting of Anthropic, contradicting a San Francisco judge who ordered those same restrictions removed last week. A hearing is set for May 19.

Categorized in: AI News, Government
Published on: Apr 11, 2026

Appeals Court Rejects Anthropic's Bid to Block Pentagon Blacklist

A federal appeals court in Washington, D.C., on Wednesday refused to block the Pentagon from blacklisting Anthropic, dealing the AI company a setback even as it prevails in a separate lawsuit over the same dispute.

The appeals court rejected Anthropic's request for emergency protection while the case proceeds. The company had sought to shield itself from Pentagon restrictions on how its Claude chatbot can be deployed in autonomous weapons and surveillance operations.

The conflicting rulings create immediate confusion for government employees and contractors. A San Francisco federal judge last week ordered the Trump administration to remove the stigmatizing labels it had placed on Anthropic as a national security risk. The Washington appeals court reached the opposite conclusion this week.

What Triggered the Dispute

Anthropic filed lawsuits after the Trump administration labeled the company a supply chain risk and issued directives restricting how government agencies could use its AI tools. The company argued the administration was retaliating for its attempt to impose limits on military deployment of its technology.

The Trump administration characterized Anthropic as a liberal-leaning company attempting to dictate military policy.

The San Francisco Victory

U.S. District Judge Rita Lin ruled that the administration had overstepped its authority by blacklisting Anthropic. She found the company qualified to work with military contractors and that the restrictions could cripple its ability to compete with rivals like OpenAI and Google.

Following that ruling, the administration removed the labels and cleared the way for government use of Claude and other Anthropic chatbots, according to court filings submitted this week.

The Washington Court's Different View

The appeals court acknowledged Anthropic would "likely suffer some degree of irreparable harm" if deemed a supply chain risk. But the judges found insufficient grounds to block the administration's actions, citing uncertainty about the precise financial damage to the company.

The appeals court scheduled a hearing for May 19 to collect additional evidence.

Business Uncertainty for Government

The split decisions create practical problems for federal agencies and contractors trying to determine what tools they can use. Matt Schruers, CEO of the Computer & Communications Industry Association, warned that the conflicting rulings "create substantial business uncertainty at a time when U.S. companies are competing with global counterparts to lead in Generative AI and LLM."

Anthropic said in a statement it remained "confident the courts will ultimately agree that these supply chain designations were unlawful."

Government workers evaluating AI tools for agency operations should monitor the May 19 hearing and expect further clarification as the courts resolve the contradiction. The outcome will likely shape how federal agencies approach vendor selection for government AI applications.

