Pentagon signs AI deals with Google, Microsoft, Amazon, Nvidia, OpenAI, Reflection and SpaceX for classified military systems

The Pentagon signed AI deals with Google, Microsoft, Amazon, Nvidia, OpenAI, and others to deploy the technology on classified military networks. Anthropic was excluded after refusing to drop safety conditions on autonomous weapons use.

Published on: May 02, 2026

Pentagon Signs AI Deals With Seven Tech Companies for Classified Military Systems

The Defense Department reached agreements Friday with Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection, and SpaceX to deploy artificial intelligence on classified military networks. The contracts will let the military use AI to speed up battlefield decisions and manage weapons maintenance and supply chains.

Notably absent is Anthropic, the AI company that has publicly clashed with the Trump administration over the ethics and safety of using AI in warfare. The company sued after Defense Secretary Pete Hegseth sought to block federal agencies from using Anthropic's Claude chatbot and to label the company a supply chain risk.

What the Military Plans to Do With AI

The Pentagon has accelerated its AI adoption in recent years. The technology can reduce the time needed to identify and strike targets, help predict when equipment needs maintenance, and determine whether vehicles on surveillance feeds are civilian or military.

Military personnel are already using AI capabilities through a platform called GenAI.mil. The Pentagon said warfighters, civilians, and contractors are "cutting many tasks from months to days."

The Anthropic Dispute

Anthropic wanted contractual guarantees that the Pentagon would not use its technology in fully autonomous weapons or to surveil Americans. Hegseth rejected those conditions, saying the military must retain the right to use any tools it deems lawful.

OpenAI, which announced its Pentagon deal in March, confirmed Friday that it was the same agreement. One company's contract includes language requiring human oversight for autonomous or semi-autonomous missions and stating that AI tools must comply with constitutional rights and civil liberties protections.

Emil Michael, the Pentagon's chief technology officer, told CNBC that working with multiple providers was necessary. "When we learned that one partner didn't really want to work with us in the way we wanted to work with them, we went out and made sure that we had multiple different providers," Michael said.

Unresolved Questions About Military AI

Helen Toner, interim executive director at Georgetown University's Center for Security and Emerging Technology, said the Pentagon still needs to work out how much human involvement is appropriate and how to train operators to avoid over-relying on AI systems.

"A lot of modern warfare is based on people sitting in command centers behind monitors, making complicated decisions about confusing, fast-moving situations," Toner said. "AI systems can be helpful in terms of summarizing information or looking at surveillance feeds and trying to identify potential targets."

She warned of automation bias, where people assume machines work better than they actually do. Operators need training to understand both the capabilities and limits of AI tools.

Concerns about military AI intensified after Israel's wars in Gaza and Lebanon, where U.S. tech companies quietly provided targeting tools. Civilian death tolls rose sharply, raising questions about whether AI contributed to those casualties.


