Pentagon Agrees to Use Google's AI Systems on Classified Networks
The Defense Department has reached a deal with Google to deploy the company's Gemini AI systems on classified military networks, according to a U.S. official. The agreement follows similar contracts with OpenAI and xAI as Defense Secretary Pete Hegseth pushes the military to become "an AI-first warfighting force."
Google did not disclose specific details of the contract. A company spokesperson said Google remains "committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight."
The Pentagon has been negotiating new contracts with America's four largest AI companies since July, seeking language that permits "any lawful use" of their systems. Google's agreement covers lawful uses by the Defense Department, the official said.
The Anthropic Standoff
Google's deal contrasts sharply with the Pentagon's standoff with Anthropic. That company had sought stronger guarantees that its AI models would not be used for domestic mass surveillance or for direct control of lethal autonomous weapons.
Defense Secretary Hegseth responded by declaring Anthropic a "supply-chain risk to national security," a designation typically reserved for foreign adversaries. President Trump announced in late February that federal agencies would stop using Anthropic's products.
Anthropic is suing the Defense Department and federal agencies to overturn the restrictions. The case is split between courts in California, where a judge temporarily halted the removal of Anthropic systems, and Washington, D.C., where the court declined to issue a similar order.
OpenAI's Quick Adjustment
OpenAI initially announced a Pentagon deal similar to Google's but faced immediate public backlash over surveillance concerns. The company reworked the agreement within days, adding language specifying that its services "shall not be intentionally used for domestic surveillance of U.S. persons and nationals."
Brian McGrail, senior counsel at the Center for AI Safety, said intelligence agencies often interpret surveillance clauses broadly. Because these contracts remain private, he added, it is difficult to assess how strong the safeguards actually are.
Military AI Adoption Accelerates
The Defense Department has used AI systems for years: analyzing drone footage in operations against the Islamic State group, streamlining logistics, fixing military pay errors, and providing targeting support in the war with Iran.
Michael Horowitz, a former senior defense official now at the University of Pennsylvania, said the Google deal "illustrates the growing importance of AI for U.S. national security." He noted that Google's systems were already in use on unclassified networks, making the classified agreement a natural extension.
Employee Resistance Persists
Around 600 Google employees sent a letter to CEO Sundar Pichai this month urging him to reject new Pentagon AI partnerships. This echoes 2018, when thousands of Google workers protested the company's involvement in Project Maven, a Pentagon AI program operated with data analytics firm Palantir.
Google declined to renew the Maven contract after the employee backlash. Pichai said at the time the company would not pursue AI applications for "surveillance violating internationally accepted norms" or weapons designed "to cause or directly facilitate injury to people."