Pentagon AI System Helped Select Targets for More Than 13,000 Strikes in Iran Campaign
The U.S. military used an artificial intelligence system called Maven Smart System to select targets during Operation Epic Fury, the air and missile campaign against Iran that began Feb. 28, according to Pentagon officials. The system, developed by Palantir Technologies, combined radar signals, satellite imagery, drone footage, and electronic communications into a single battlefield map that commanders used to identify targets and choose weapons.
The White House reported that U.S. forces struck more than 13,000 targets in Iran during the first 38 days of the campaign, including command-and-control facilities, air defense systems, and industrial sites. Officials characterized all of the targets as legitimate military objectives, but news outlets documented the destruction of civilian buildings.
How Maven Works
Cameron Stanley, the Pentagon's chief digital and AI officer, described Maven as a visualization tool that consolidates data from multiple sources into one screen. Instead of consulting eight or nine separate systems, commanders can identify a target and select a strike package by clicking on the display.
The system generates lists of targets, such as radar stations or communications nodes, ranked by strategic importance. After strikes occur, Maven automatically reviews damage assessments and produces updated target lists within minutes.
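Maven's internals are not public, so the loop officials describe, fusing multi-sensor reports into one picture, ranking targets by importance, then re-ranking after damage assessments, can only be illustrated schematically. The short Python sketch below is a hypothetical rendering of that loop; the class names, category weights, and corroboration heuristic are all assumptions for illustration, not Palantir's implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: Maven's real data model and scoring are not public.
# Every name, field, and weight below is an illustrative assumption.

@dataclass
class Track:
    """One fused battlefield object built from multiple sensor reports."""
    name: str
    category: str                               # e.g. "radar", "comms_node"
    sources: set = field(default_factory=set)   # sensors that reported it
    destroyed: bool = False

# Illustrative strategic weights; a real system would use far richer criteria.
CATEGORY_WEIGHT = {"radar": 3.0, "comms_node": 2.5, "depot": 1.0}

def fuse(reports):
    """Merge per-sensor (sensor, name, category) reports into one track per object."""
    tracks = {}
    for sensor, name, category in reports:
        track = tracks.setdefault(name, Track(name, category))
        track.sources.add(sensor)
    return tracks

def ranked_targets(tracks):
    """Rank surviving tracks: category weight boosted by multi-source corroboration."""
    alive = [t for t in tracks.values() if not t.destroyed]
    return sorted(alive,
                  key=lambda t: CATEGORY_WEIGHT.get(t.category, 0) * len(t.sources),
                  reverse=True)

reports = [
    ("satellite", "Site-A",  "radar"),
    ("drone",     "Site-A",  "radar"),       # second source corroborates Site-A
    ("sigint",    "Relay-3", "comms_node"),
    ("satellite", "Depot-7", "depot"),
]
tracks = fuse(reports)
print([t.name for t in ranked_targets(tracks)])  # ['Site-A', 'Relay-3', 'Depot-7']

# After a strike, a damage assessment updates the picture and the list re-ranks.
tracks["Site-A"].destroyed = True
print([t.name for t in ranked_targets(tracks)])  # ['Relay-3', 'Depot-7']
```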
In late 2024, Anthropic's Claude AI was integrated with Maven to provide enhanced targeting options. Anthropic subsequently withdrew from military work due to concerns about autonomous weapons and domestic surveillance, and other AI companies have taken its place.
Speed Creates Oversight Risks
Military commanders view Maven's rapid targeting cycle as a tactical advantage. Adm. Brad Cooper, commander of U.S. Central Command, said the system allows forces to "sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react."
But the speed raises concerns about human review. Nilza Amaral, head of research at Chatham House's Global Governance and Security Centre, warned that "humans may rely too much on the system" and fail to verify recommendations. With less time available for review, the risk of errors increases.
School Strike Raises Questions
A Feb. 28 cruise missile strike on Shajareh Tayyebeh Primary School in Minab killed more than 170 people, mostly students. A Central Command assessment concluded that outdated intelligence maps failed to show that the site, a former military base, had been converted to civilian use.
Pentagon officials attributed the error to a human failure to update the maps rather than to an AI malfunction. The New York Times reported March 11 that officials said the error was "unlikely to have been the result of new technology."
Observers warn, however, that increased reliance on AI systems creates "automation bias," where personnel treat machine recommendations as verified facts rather than suggestions requiring scrutiny. Amaral said there is concern that targeting approvals "could end up just being a mere formality because of the automation bias, where people are just relying on what the machine is telling them."