Appeals court refuses to block Pentagon blacklist of Anthropic as separate ruling favors the company

A federal appeals court refused to block the Pentagon's blacklisting of Anthropic, even as a San Francisco judge ordered the same designation removed. The split rulings leave companies uncertain about which rules apply.

Published on: Apr 09, 2026

Pentagon Blacklist of Anthropic Stands in Appeals Court, Despite Separate Win

A federal appeals court in Washington, D.C., refused Wednesday to block the Pentagon from blacklisting artificial intelligence company Anthropic, even as the company prevailed in a separate San Francisco lawsuit over the same issues.

The conflicting rulings create immediate uncertainty for managers overseeing AI deployment. Anthropic challenged the Trump administration's designation of the company as a supply chain risk, arguing the move was retaliation for the company's attempt to restrict how its Claude chatbot could be used in autonomous weapons and surveillance.

What Happened in San Francisco

U.S. District Judge Rita Lin in San Francisco ruled the administration overstepped its authority. She found the supply chain risk label unjustified and forced the government to remove it, clearing the way for federal employees and contractors to continue using Claude.

The Trump administration complied with that order earlier this week, according to court filings.

The Washington Appeals Court Disagreement

The appeals court panel acknowledged Anthropic would "likely suffer some degree of irreparable harm" from the blacklist but declined to issue its own blocking order. The court cited uncertainty about the precise financial damage to the company and scheduled another hearing for May 19 to gather more evidence.

Anthropic said it remains confident courts will ultimately rule the supply chain designations unlawful.

Business Risk From Conflicting Rulings

The split decisions expose a management problem: competing legal outcomes create ambiguity about which rules actually apply. Matt Schruers, CEO of the Computer & Communications Industry Association, warned the conflicting rulings "create substantial business uncertainty at a time when U.S. companies are competing with global counterparts to lead in AI."

For managers responsible for AI strategy and vendor relationships, the case illustrates how regulatory and legal decisions can shift quickly and contradict one another. Understanding these dynamics matters for anyone evaluating AI tools for organizational use.

