Pentagon signs AI deals with Google, Microsoft, OpenAI, and others for classified military operations
The U.S. Department of Defense has signed agreements with seven major technology companies to integrate artificial intelligence into classified military computer networks. The deals involve Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection, and SpaceX.
The Pentagon said the partnerships aim to "augment warfighter decision-making in complex operational environments." Military personnel are already using AI tools through the Pentagon's GenAI.mil platform, and officials report that the tools have cut some tasks from months to days.
What the Pentagon is using AI for
The Defense Department has accelerated AI adoption in recent years to speed up battlefield decision-making, improve logistics, and assist with weapons maintenance. The department said expanding these capabilities would "give warfighters the tools they need to act with confidence and safeguard the nation against any threat."
Anthropic excluded over contract disputes
Notably absent from the agreements is Anthropic, which has been in a public dispute and legal battle with the Trump administration over military AI use. Anthropic sought contractual guarantees preventing its technology from being used in fully autonomous weapons systems or domestic surveillance of Americans.
Defense Secretary Pete Hegseth rejected those conditions, insisting the Pentagon retain authority for any use deemed lawful. Anthropic later sued after Trump attempted to block federal agencies from using the company's Claude chatbot and after the Pentagon considered labeling it a supply chain risk.
OpenAI takes Anthropic's place
OpenAI confirmed that Friday's announcement formalized an agreement first revealed in March, effectively replacing Anthropic in classified AI environments. The company said in a statement: "We believe the people defending the United States should have the best tools in the world."
OpenAI previously said its Pentagon agreement includes safeguards requiring human oversight in certain AI-assisted operations.
Concerns about autonomous weapons and oversight
The Pentagon's expanding use of AI has intensified debate over ethics, privacy, and autonomous weapons. Critics warn that AI systems could eventually be used to select battlefield targets or expand surveillance capabilities.
One agreement reportedly includes language requiring human oversight whenever AI systems act autonomously or semi-autonomously. The same agreement states that AI tools must operate in ways consistent with constitutional rights and civil liberties.
Concerns over military AI use gained attention during Israel's wars in Gaza and Lebanon, where U.S. technology firms reportedly supplied AI-powered systems used to track targets. High civilian casualties in those conflicts fueled criticism that AI-assisted warfare contributes to the deaths of innocent people.
What this means for operations professionals
For those managing military operations, understanding how these AI systems work is becoming essential. Personnel using these tools should build a working knowledge of AI for Operations and of generative AI and large language model (LLM) concepts, since these systems will directly shape how decisions are made and executed in operational environments.
The Pentagon's rapid deployment of these systems means operations teams will need to understand both the capabilities and limitations of AI-assisted decision-making in real time.