D.C. appeals court denies Anthropic's bid to block Pentagon supply chain risk designation

A D.C. appeals court refused to block the Pentagon's "supply chain risk" label on Anthropic, letting the Defense Department bar the company from new contracts. A separate San Francisco injunction still prevents an outright ban on Claude.

Categorized in: AI News, Legal
Published on: Apr 12, 2026

D.C. Appeals Court Denies Anthropic's Challenge to Pentagon "Supply Chain Risk" Label

A federal appeals court in Washington denied Anthropic's request to halt the Defense Department's designation of the company as a "supply chain risk," allowing the Pentagon to continue operating under that assessment. The ruling came after a San Francisco court had previously granted the AI firm a temporary reprieve from an outright ban on its Claude model.

The decision leaves Anthropic in an unusual legal position. Other federal agencies can still use Claude, but the Pentagon's designation effectively blocks the company from pursuing new defense contracts, even as existing arrangements may continue during the litigation.

What the Pentagon's Designation Means

The "supply chain risk" label is the Pentagon's formal assessment that Anthropic poses potential security concerns related to critical infrastructure integration. The designation doesn't amount to a complete prohibition but signals caution about the company's technology in sensitive government applications.

Anthropic has argued the label is overly broad and stifles innovation. The company contends that applying such designations without clear technical justification creates uncertainty for AI developers working with government clients.

The Legal Status

The San Francisco court's temporary injunction remains in place, preventing the administration from banning Claude outright. However, the D.C. appeals court's denial of Anthropic's broader challenge means the Pentagon can maintain its risk assessment while litigation continues.

This split outcome leaves Anthropic in a holding pattern: the San Francisco injunction shields Claude from an outright ban, but only temporarily, while the broader fight over the Pentagon's designation continues in court.

Implications for Government Contracting

The case raises questions about how federal agencies evaluate and communicate risk designations for technology vendors. A "supply chain risk" label can effectively exclude companies from government work without the formal process of a contract ban.

For legal professionals handling government contracts, the ruling illustrates how administrative designations can create practical barriers to business. Understanding the distinction between formal bans and risk designations is critical when advising clients on federal procurement.

The outcome may set precedent for how other agencies classify AI companies and similar technologies. As more powerful AI tools enter government systems, the standards for these designations will likely face continued legal scrutiny.

Legal professionals working in government contracts and regulatory compliance should monitor this case. The court's interpretation of agency authority to issue supply chain risk designations could affect how other technology vendors navigate federal procurement restrictions.
