Federal Judge Blocks Government's Retaliation Against Anthropic Over AI Weapons Restrictions
A federal judge temporarily blocked the Department of Defense from designating Anthropic a "supply chain risk," halting a Trump administration effort to punish the AI company for refusing to remove safeguards on its Claude system.
U.S. District Judge Rita Lin issued the preliminary injunction on March 26, preventing the government from implementing President Donald Trump's directive that all federal agencies "immediately cease" using Anthropic's technology. The order also blocked the Pentagon and Defense Secretary Pete Hegseth from applying the supply chain risk designation.
The dispute centers on contract language Anthropic refused to remove. The company's Claude system includes restrictions preventing the Pentagon from using it for autonomous weapons or mass domestic surveillance. Rather than simply end the contract, the Trump administration declared Anthropic a supply chain risk, a designation typically reserved for foreign companies, and ordered all federal agencies to stop using its products.
Anthropic sued, claiming the government violated its First Amendment rights. Lin agreed the evidence was strong. "This appears to be classic First Amendment retaliation," she wrote in the order.
Lin noted the government has legitimate authority to choose different AI vendors. "It is the Department of War's prerogative to decide what AI product it uses," she wrote. The issue is whether the government exceeded that authority by punishing Anthropic for its public stance on how its technology should be used.
The judge found the government's conduct "appears to be driven not by a desire to maintain operational control when using AI in the military but by a desire to make an example of Anthropic for its public stance on the weighty issues at stake in the contracting dispute."
In a separate case, Meta and Google face legal pressure over the design of their social media platforms. A Los Angeles jury found both companies liable for negligent product design that caused psychological harm to a young woman. The verdict follows a similar decision against Meta in New Mexico last week.
The cases treat social media platforms as physical products rather than speech platforms, raising free speech concerns. Legal experts warn that the theories used to hold Meta and Google liable could be turned against smaller platforms and effectively undermine Section 230, which shields online services from liability for user-generated content.
Meta's chief legal officer said the company will appeal both verdicts. "We disagree with these verdicts, respectfully," C.J. Mahoney told Fox News. "We think that they're vulnerable on appeal and we're going to pursue those appeals aggressively."
Other Developments
- OpenAI shelved plans to release an erotic chatbot following staff and investor concerns. The company is also phasing out Sora, its AI video and social media app.
- A Tennessee grandmother spent more than five months in jail after police used facial recognition to link her to crimes in North Dakota, a state she says she never visited.
- The Ohio House passed a bill defining "adult cabaret" performance to include anyone with a gender identity different from their assigned sex at birth, effectively banning drag performances in public spaces.
- U.S. clinics reported more than 11,500 gestational carrier cycles in 2023, nearly seven times the number in 2004.
For government officials overseeing AI policy, the Anthropic case illustrates the stakes of how agencies approach vendor relationships and technology restrictions.