Stop the use of AI in war until laws can be agreed
Frontier AI is too unreliable, too opaque, and too fast for the rules we have. That's the core problem. Until there are clear, binding laws, AI should not be used for mass surveillance or in fully autonomous weapons.
Recent strikes in the Middle East underscored how close AI now sits to the battlefield. Militaries want speed: faster target prioritization, split-second counter-fire, less back-and-forth with command. But speed without accountability is a liability, not an edge.
International humanitarian law already demands distinction and proportionality: verify targets, avoid indiscriminate harm, minimize civilian risk. Those duties apply to AI systems just as they do to any other weapon or sensor. If a model can't meet those standards reliably and demonstrably, it has no business in combat. See the basics of these rules at the ICRC's overview of international humanitarian law.
A growing split between AI labs and defense
Anthropic refused to allow its models to be used for mass surveillance or in weapon systems lacking human oversight, arguing that those limits mark what the technology can "reliably and responsibly" support today. The US Department of Defense cancelled the contract, labeled the company a "supply chain risk," and Anthropic sued.
OpenAI accepted a version of the Pentagon's terms. In response, more than 100 OpenAI employees and nearly 900 at Google signed a public letter urging both companies to reject such uses. OpenAI's head of robotics resigned, saying that lethal autonomy without human authorization and surveillance without judicial oversight crossed lines that never received the debate they deserved.
Many researchers joined these firms expecting strong boundaries. Those walls moved. In early 2024, OpenAI removed a "no military and warfare" clause from its policy; Google dropped commitments against surveillance and weapons uses. Voluntary promises are easy to edit. Law isn't.
Voluntary pledges won't hold under pressure
There's a playbook for dangerous tech. Nuclear, chemical, and biological weapons drew legal lines after early, chaotic development. The Convention on Certain Conventional Weapons (CCW) exists to address emerging systems, including lethal autonomous weapons. Progress has been slow due to political disagreements and unresolved definitions of "autonomy," but the forum is there. Learn more at the United Nations CCW.
An independent UN scientific panel has been appointed to bring evidence to the table. The US DoD plans a working group with military, government, and frontier lab leaders. The right outcome is clear: draw bright lines, then enforce them.
What scientists and research leaders can do now
- Back a moratorium: no AI for mass surveillance and no target selection/engagement without meaningful human control, until binding rules and verifiable safeguards exist.
- Publish and enforce red lines in model and product policies: no lethal autonomy, no domestic warrantless surveillance, no biometric targeting in conflict zones.
- Gate high-risk capabilities: use API-level restrictions, human-authorization checks, logging, and revocation. Ship model cards with explicit wartime-use limits. (A minimal gating sketch follows this list.)
- Build IHL-aligned evaluations: test for target misclassification, escalation risks, and failure under adversarial conditions; require third-party audits before deployment near conflict. (See the evaluation sketch after this list.)
- Ensure accountability: require a named human decision-maker for any lethal effect; add auditable "kill switches" for rapid deactivation.
- Use your leverage: coordinate open letters, internal escalation, and if needed, refusal to ship features that undermine IHL. Procurement and talent pressure move policy.
- Universities and funders: require dual-use reviews, public risk assessments, and disclosure of defense ties as grant conditions.
- Journals and conferences: strengthen ethics disclosures; decline work that enables lethal autonomy without enforceable safeguards.
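To make the gating point concrete, here is a minimal Python sketch of a deny-by-default gate: every call to a flagged capability requires a named human approver, is written to an audit log, and can be cut off by revoking the approver's grant. All names here (`HumanAuthorization`, `gate_high_risk_call`, `REVOKED_APPROVERS`) are hypothetical illustrations, not any lab's actual API; a production system would back the log and revocation list with tamper-evident external storage.

```python
import logging
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("capability-gate")

# Hypothetical revocation list; a real system would use an auditable,
# tamper-evident external store rather than in-process memory.
REVOKED_APPROVERS: set[str] = set()

@dataclass
class HumanAuthorization:
    """A named human approving one specific high-risk request."""
    approver: str  # a person, never a service account
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def gate_high_risk_call(capability: str, payload: dict,
                        auth: HumanAuthorization | None) -> dict:
    """Deny by default: refuse, log, or forward a high-risk request."""
    request_id = str(uuid.uuid4())
    if auth is None:
        log.warning("DENIED %s %s: no human authorization", capability, request_id)
        raise PermissionError("high-risk capability requires a named human approver")
    if auth.approver in REVOKED_APPROVERS:
        log.warning("DENIED %s %s: approver %s revoked",
                    capability, request_id, auth.approver)
        raise PermissionError("authorization revoked")
    # Append-only audit record: who approved what, and when.
    log.info("ALLOWED %s %s approved_by=%s at=%s",
             capability, request_id, auth.approver, auth.issued_at.isoformat())
    return {"request_id": request_id, "capability": capability, "status": "forwarded"}

# Usage: an unauthorized call fails loudly; revocation acts as a kill switch.
try:
    gate_high_risk_call("target-nomination", {"query": "..."}, auth=None)
except PermissionError as err:
    print(err)
```

The design choice that matters is the default: the gate fails closed, so a missing or revoked authorization stops the call rather than relying on someone remembering to add a block.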
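And a sketch of the evaluation gate: before anything ships near a conflict, measure how often a classifier marks labeled protected (civilian) objects as valid targets, and block deployment if the rate exceeds a pre-registered bound. The threshold, function names, and toy data below are illustrative assumptions, not a validated test suite; real evaluations would also cover escalation behavior and adversarial conditions, as the list above says.

```python
import random

# Illustrative policy bound, set by regulators or auditors, not by the lab.
MAX_PROTECTED_MISCLASSIFICATION = 0.001

def protected_misclassification_rate(classify, labeled_scenes) -> float:
    """Fraction of protected-object scenes the classifier calls a valid target."""
    protected = [scene for scene, is_protected in labeled_scenes if is_protected]
    if not protected:
        raise ValueError("evaluation set must contain protected objects")
    errors = sum(1 for scene in protected if classify(scene) == "target")
    return errors / len(protected)

def deployment_gate(classify, labeled_scenes) -> bool:
    """True only if the measured error rate is within the pre-registered bound."""
    rate = protected_misclassification_rate(classify, labeled_scenes)
    print(f"protected-object misclassification rate: {rate:.4%}")
    return rate <= MAX_PROTECTED_MISCLASSIFICATION

# Demo with a deliberately unreliable stand-in classifier (~5% error).
random.seed(0)
scenes = [(f"scene-{i}", i % 2 == 0) for i in range(1000)]  # half protected
noisy_model = lambda scene: "target" if random.random() < 0.05 else "no-target"
assert deployment_gate(noisy_model, scenes) is False  # must not ship
```

Crucially, the third-party auditors the list calls for, not the developer, should hold the labeled scenes and run the gate.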
What a treaty should settle
- Clear definitions: what counts as "autonomous" in target selection and engagement; what "meaningful human control" requires in practice (time, context, and authority to veto).
- Prohibited uses: indiscriminate mass surveillance, biometric identification for targeting, automated retaliation, and deployment without validated performance bounds.
- Verification and oversight: standardized evaluations, audit rights, incident reporting, and penalties for non-compliance.
- Responsibility and liability: unbroken chains of command accountability across developers, integrators, and operators.
Treaties take time. AI moves fast. That's exactly why the pause matters. If we don't set the rules before deployment scales, we inherit the failure modes at machine speed.
If you work at the intersection of research and policy, consider upskilling to influence these decisions with confidence: AI Learning Path for Policy Makers.
Start the legal work now. Keep AI out of mass surveillance and lethal autonomy until the guardrails are real and enforceable.