Pentagon removes Anthropic's AI from military operations as rivals move to fill the gap

The Pentagon canceled its $200M contract with Anthropic and ordered Claude removed from military systems within six months after a dispute over who controls the AI's use restrictions. Google and OpenAI are moving in to fill the gap.

Categorized in: AI News, Government
Published on: Mar 19, 2026

Pentagon Removes Anthropic's AI From Military Operations Amid Political Dispute

The Pentagon ordered Anthropic to remove its AI systems from military operations within six months, creating an opening for competing firms to reshape how artificial intelligence integrates into U.S. defense. The decision follows a dispute between Anthropic's leadership and the Trump administration over who controls restrictions on the technology's military use.

An internal Pentagon memo revealed that Anthropic's Claude AI was deployed in classified operations involving nuclear weapons, ballistic missile defense, and cyber warfare. The system was also likely used in U.S. operations against Iran, according to sources familiar with military AI deployment.

How AI Accelerates Military Operations

The military now processes roughly 1,000 potential targets daily and strikes the majority of them, with turnaround time under four hours. Retired Navy Admiral Mark Montgomery said this compression from days to hours represents a significant shift in how campaigns operate.

"A human is still in the loop, but AI is doing the work that used to take days of analysis - and doing it at a scale no previous campaign has matched," Montgomery said.

The Pentagon uses AI much like commercial users do: to summarize vast amounts of information. AI analyzes documents, video, and images from the battlefield to help war-game scenarios, minimize casualties, and identify effective weapons.

The volume of data available has made AI essential. Cameras, smartphones, and connected devices flood modern battlefields with information that no room of human analysts could process on relevant timelines. AI algorithms sift through this data to build targeting packages, assign strike assets, and assess damage in near-real time.

When hundreds of drones and missiles arrive within hours, as in air defense scenarios, no human team can decide in real time which ones to intercept and when. That task now falls to AI systems.

Claude's Role in Classified Systems

Claude is the only large-scale AI system operating on the Defense Department's classified networks. A source directly familiar with Claude's military capabilities said the system's primary function is sifting through intelligence reports: synthesizing patterns, summarizing findings, and surfacing relevant information faster than human analysts could.

The targeting process remains human-driven. Anthropic's usage policy allows the Defense Department to use Claude for analyzing foreign intelligence, but requires humans to make all decisions on military targets.

CBS News could not independently verify whether Claude was used in a February 28 strike on a girls' school in Iran, for which the U.S. was likely responsible.

Physical Weapons Still Dominate the Battlefield

AI doesn't operate in isolation. Aircraft carriers, drones, and missiles from legacy contractors like Northrop Grumman, Boeing, and Lockheed Martin remain the primary weapons. AI analyzes data before those weapons are deployed, but does not fly planes or fire missiles.

Traditional defense contractors supply 98% of weapons used in current operations. Montgomery said war could still be fought without AI, but it would be "less desirable." The technology's role will likely grow with each campaign.

The Contract Dispute and Legal Battle

In July, the Pentagon signed a $200 million contract with Anthropic to integrate Claude into its systems. The Pentagon canceled the contract after disagreements over who controls restrictions on the AI's military use.

Anthropic is now suing the federal government, alleging retaliation for protected speech. Microsoft and workers from OpenAI and Google have filed supporting briefs in the case.

The Pentagon has a six-month window to remove Anthropic's products from its systems. Despite designating Anthropic a supply chain risk, the department continues using Claude in Iran operations during this transition period.

Competitors Move In

Google announced it is rolling out AI agents for non-classified military uses. OpenAI CEO Sam Altman posted about deploying ChatGPT models in the Pentagon's classified network following Anthropic's falling-out with the Defense Department in late February.

OpenAI's deal includes language on three restrictions: no autonomous lethal weapons, no mass surveillance of Americans, and no high-stakes automated decisions without human oversight.

Government workers managing procurement and AI integration should prepare for rapid shifts in which vendors supply military AI systems. The current dispute signals that policy disagreements between tech firms and the Pentagon will shape which platforms get deployed next.


