Defense Contractors Hide Behind AI Models While Civilians Die
Palantir, Google, Amazon, Microsoft, OpenAI and Anthropic are not technology companies supplying militaries with tools. They are defense contractors whose systems sit inside the kill chain, selecting targets and determining who gets killed. The distinction matters because it determines what rules apply to them.
A classified Israeli military database reviewed by multiple news organizations showed that of more than 53,000 deaths recorded in Gaza, roughly 17% were identified as Hamas or Islamic Jihad fighters. The remaining 83% were civilians. These numbers do not reflect precision targeting. They reflect a system where imprecision is the aim.
The mechanics are straightforward. In Gaza, an algorithm processed phone records, movement patterns, social connections and behavioral signals for every person in the territory. It produced a ranked list of names with probability scores indicating the likelihood each person was a combatant. Humans then reviewed each name for an average of 20 seconds - long enough to confirm the target was male - before approving it. One system alone generated more than 37,000 targets in the first weeks of the war.
This is not a human analyst identifying a known militant and programming a weapon to hit them. The AI was inferring identities statistically across an entire population, generating targets no human had individually assessed before they appeared on the list.
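For readers who want those mechanics made concrete, the sketch below is a purely illustrative toy in Python, not a reconstruction of any real system: a stand-in scoring function assigns each person a probability-like score from a handful of behavioral signals, and everyone above a threshold comes back as a ranked candidate list. Every field name, weight and threshold here is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class PersonRecord:
    """One person's hypothetical surveillance profile: the kinds of signals
    the article describes (phone metadata, movement, social connections)."""
    person_id: str
    features: dict = field(default_factory=dict)

def score(record: PersonRecord) -> float:
    """Stand-in for the opaque statistical model. Returns a probability-like
    score between 0 and 1; the number itself carries no auditable reasoning."""
    weights = {
        "calls_to_flagged_numbers": 0.40,    # hypothetical feature names
        "movement_near_flagged_sites": 0.35,
        "shared_group_memberships": 0.25,
    }
    s = sum(w * record.features.get(k, 0.0) for k, w in weights.items())
    return min(max(s, 0.0), 1.0)

def rank_population(records: list[PersonRecord], threshold: float = 0.7):
    """Score everyone, sort descending, and return all above-threshold people
    as a ranked candidate list with their probability scores."""
    scored = sorted(((score(r), r.person_id) for r in records), reverse=True)
    return [(pid, round(p, 2)) for p, pid in scored if p >= threshold]

# Toy usage with two hypothetical records.
population = [
    PersonRecord("A-001", {"calls_to_flagged_numbers": 0.9,
                           "movement_near_flagged_sites": 0.8,
                           "shared_group_memberships": 0.7}),
    PersonRecord("A-002", {"shared_group_memberships": 0.3}),
]
print(rank_population(population))  # only A-001 clears the 0.7 threshold
```

The point of the toy is structural: the output is just a number attached to a name, and nothing in it records why any individual crossed the threshold - which is exactly the auditing problem discussed below.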
The Minab School Strike
At the start of the US-Israeli Iran campaign, a strike hit the Shajareh Tayyebeh elementary school in Minab, southern Iran. At least 168 people were killed, most of them children aged 7 to 12. Munitions experts described the targeting as "incredibly accurate" - each building individually struck, nothing missed.
The problem was not execution. It was intelligence. The school had been separated from an adjacent Revolutionary Guard base by a fence and repurposed for civilian use nearly a decade earlier. Somewhere in the targeting cycle, that change was never registered.
Two sources confirmed to NBC News that Palantir's AI systems, which draw in part on large language model technology, were used to identify targets in Iran. Whether or not every strike was AI-assisted, the tempo of the campaign - striking 1,000 targets in the first 24 hours - was only possible because targeting had been substantially automated.
The Accountability Problem
International humanitarian law requires that a commander make every reasonable effort to verify a target is a legitimate military objective. It also requires that everything feasible be done to protect civilians. That obligation cannot be delegated to a system whose reasoning is opaque and whose outputs cannot be interrogated in real time.
When verification times for AI-assisted targets are measured in seconds, the conversation is no longer about human judgment with algorithmic assistance. It is about rubber-stamping a machine's output.
AI targeting systematically destroys three conditions that accountability frameworks require. Attribution dissolves across engineers, commanders, operators and corporate suppliers, each of whom can point to another. Reasoning disappears into a probability score no lawyer can audit and no court can cross-examine. Process collapses into a 20-second approval of a machine recommendation.
The companies that built and sold the system sit entirely outside the legal framework. International humanitarian law was designed for states and their agents. Palantir is not a signatory to the Geneva Conventions.
How the Market Works
These companies are not obscure startups. Palantir, founded with early backing from In-Q-Tel, the CIA's venture capital arm, is one of the primary AI infrastructure providers to the US military. Google signed Project Nimbus, a cloud-computing and AI contract with the Israeli government and military worth more than $1 billion, despite significant internal employee protest; Amazon is a co-signatory. Microsoft was deeply integrated with Israeli military systems before partially withdrawing under pressure in 2024 - at which point the data migrated to Amazon Web Services within days.
Anthropic, which supplies Claude to Palantir's systems, attempted to resist Pentagon pressure to remove ethical constraints on its use for targeting. The Pentagon responded by threatening to cut ties and turning to OpenAI and others instead. The market for killing at scale does not lack for suppliers.
OpenAI, which until recently prohibited military use in its terms of service, quietly removed that restriction in early 2024 and has since pursued Pentagon contracts. Anduril, founded by Palmer Luckey and staffed heavily with former US defense officials, builds autonomous weapons systems explicitly designed for lethal targeting.
Palantir spent close to $6 million lobbying Washington in 2024. In one quarter of 2023, it outspent Northrop Grumman. A consortium of Palantir, Anduril, OpenAI, SpaceX and Scale AI was described by its own participants as an effort to build the next generation of defense contractors for the US government.
The Regulatory Vacuum
The EU AI Act, the most ambitious attempt yet to govern artificial intelligence, explicitly exempts military and national security applications. It designates international humanitarian law as the appropriate framework - the same body of law being systematically destroyed by these systems.
In the United States, the AI provisions of the 2025 National Defense Authorization Act do not regulate military AI. They direct agencies to adopt more of it. The regulatory culture has not failed to catch up with the technology. It has decided deliberately not to try.
The only serious government intervention in AI military capability has come not from a state demanding restraint or accountability, but from the US demanding the systems be made more lethal.
What Regulation Requires
Banning these systems outright is impossible when so many of the actors involved care little about international law. But pressure points remain.
The EU has tools through export controls and procurement conditions on dual-use systems that move between commercial and defense markets. International courts are beginning to open doors: the ICJ advisory opinion on Palestinian rights has created a framework in which companies supplying systems used in unlawful strikes face potential liability exposure.
AI firms need governments not just as customers but as the providers of computing power, energy and physical infrastructure that frontier AI requires. No company can sustain that from commercial revenues alone. That dependency gives states willing to use it real leverage over companies that would prefer not to be regulated.
What regulation should look like is relatively straightforward. AI systems used in targeting must be explainable - not via a probability score but through reasoning a lawyer can audit. The cumulative civilian cost of AI-assisted campaigns must be assessed as a whole, not strike by strike. And liability must not stop at the operator; it must extend up the supply chain to the companies that knowingly built and sold opaque systems for use in armed conflict.
These are not novel demands. They are the minimum conditions for the laws of war to mean anything in the age of algorithmic targeting.
The soldiers who fired into darkness during the second intifada were at least present in it. The companies that built what replaced them are doing it from Palo Alto, at no personal risk, with no legal exposure, and with every incentive to do it again.