AI, Violence, and International Law: Key Insights from a Conversation with Frédéric Mégret
Autonomous systems are no longer a thought experiment. They're selecting targets, queuing strikes, and turning continuous streams of data into lethal choices. Frédéric Mégret, Professor of International Law at McGill University, argues that this shift isn't just technical: it tests the basic moral and legal architecture that legitimizes violence.
Below is a concise synthesis of his most pressing ideas, with practical implications for counsel, compliance teams, and policy leads.
The hidden violence of datafication
War now runs on data. That comes with three quiet harms: it commodifies people, it reduces them to single-use categories, and it abstracts humans into models. AI doesn't start this trend; it accelerates it and keeps the feed "always on."
For legal teams, that means evidence, targeting logic, and surveillance workflows are all part of the harm analysis. The raw materials of conflict aren't just bullets and bombs; they're datasets, labels, and decision thresholds.
Agency and accountability: the human made "innocent"
International law has leaned hard on individual responsibility and mens rea. But with complex, distributed systems, the trigger is everywhere and nowhere. The risk is structural: "the machine did it" becomes a shield.
Mégret's core point: no system is truly autonomous. Every "autonomous" decision sits on a prior human decision to enable, deploy, and set parameters. The law should keep a named, flesh-and-blood decision-maker in the loop for any system that can cause harm.
What counts as violence when machines execute it?
Violence used to be personal: humans doing harm to humans. Now, we initiate a process with its own inertia. If you launch a chain of effects you know will cause harm, you bear responsibility for the foreseeable consequences; arguably, you bear more of it than if you had acted directly.
That framing matters for both jus ad bellum and jus in bello: authorization, proportionality, and precaution can't stop at the first click. They must extend through the predictable behavior of the system you set in motion.
Where responsibility erodes first
Diffusion is the playbook: spread decisions across teams, vendors, datasets, and software so no one owns the act. Responsibility then feels "too individualized" to stick or "too structural" to bite. Both extremes let harm pass through the gaps.
Command responsibility is the most adaptable doctrine here. If you deploy inherently dangerous tools, you take on heightened duties to supervise, prevent, and punish. That logic fits autonomous weapons as much as human subordinates.
Delegation isn't new, but the automation of deliberation is
States have always outsourced the "dirty work" to soldiers, proxies, even animals. What's new is automating the deliberation itself. Distance is no longer just physical; it's cognitive. Humans don't have to witness, compute, or consent to each decision.
That distance reduces reluctance, dulls accountability, and makes persistent, low-friction force politically easier.
Ethics without the gut check
Machines don't drink, hate, or seek revenge. That can reduce certain risks. But killing isn't lane-keeping. The absence of empathy removes the moral friction that signals "this is grave." Strip that out, and war looks like workflow.
Can AI "understand" civilian harm?
You can encode metrics for harm. You can't encode grief. Much of humanitarian restraint springs from recognition and empathy, qualities models do not have. Expect systems that execute more cleanly on paper but risk widening what society tolerates in practice.
Ban or regulate fully autonomous weapons?
Some momentum exists for treaty action, but great-power resistance is predictable. A ban that major users ignore is weak law. A minimum floor of real human oversight, traceability, and accountability may be more achievable in the near term.
For reference, see current debates under the UN Convention on Certain Conventional Weapons and ICRC guidance on autonomous weapons and legal reviews:
- ICRC: Autonomous weapon systems
- ICRC: Legal reviews (Article 36)
Inequality: AI as a force multiplier for hegemony
High-end AI is scarce and expensive. States with access can insulate themselves from risk while projecting force more efficiently. Expect wider asymmetries, and more pressure on less-resourced actors to fight in ways that look "ugly" and fall foul of IHL, at least optically.
Bias and the machinery of structural harm
Targeting built on signatures, proxies, and shaky priors will mirror prejudice. The incentive structure is clear: maximize effect, minimize traceable liability, protect "your" personnel. And with 24/7 sensor-to-shooter loops, harm becomes relentless unless checked.
Do we need a new institution?
Mégret is skeptical. Without military buy-in and domestic political pressure, a new body risks symbolism. Publics must care, and armed forces must see long-term self-interest in restraint-much like the lesson of chemical weapons in World War I.
Practical moves for legal teams
- Require an Article 36-style review for all AI-enabled weapons and targeting tools, even if your state isn't party to Additional Protocol I. Document assumptions, data sources, and foreseeable effects.
- Preserve a named human decision-maker for any lethal action. Log sign-offs, overrides, and escalation paths. No black-box strikes (see the first sketch after this list).
- Contract for auditability: model cards, data lineage, update history, red-team results, and post-deployment incident reporting. No deploy-and-forget (see the second sketch after this list).
- Write "fail-safe" clauses into procurement: strict deactivation conditions, rules for degraded environments, and immutable guardrails for protected persons/objects.
- Assess foreseeability like product liability: training data bias, operating envelopes, adversarial vulnerabilities, and human-machine interface risks (over-trust, automation bias).
- Map responsibility across the chain: developer, integrator, commander, operator, and political leadership. Define who carries what duty and when.
- Mandate real-time and after-action reviews: civilian harm tracking, sampling of "near-miss" events, and independent oversight with authority to suspend use.
- Ensure explainability at the level needed for IHL and criminal accountability. If it can't be explained, it shouldn't be fielded.
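To make the "named human decision-maker" and sign-off logging items concrete, here is a minimal sketch of an append-only decision record that is logged regardless of outcome but only clears an action when a named individual has signed off. All class, field, and file names are illustrative assumptions, not the interface of any real targeting or compliance system.

```python
# Minimal sketch: an append-only decision log that refuses to clear any
# action without a named, flesh-and-blood sign-off. Illustrative only; the
# names below are assumptions, not a real system's API.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json


@dataclass
class DecisionRecord:
    action_id: str                       # identifier for the proposed action
    system_recommendation: str           # what the autonomous component proposed
    rationale: str                       # human-readable justification
    signed_off_by: Optional[str] = None  # named individual; None = not authorized
    overridden: bool = False             # True if a human overrode the system
    escalated_to: Optional[str] = None   # next link in the escalation path
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def authorize(record: DecisionRecord, log_path: str) -> bool:
    """Append the record to an audit log and report whether it may proceed.

    The rule mirrors the checklist item: no named human sign-off, no action.
    """
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")
    return record.signed_off_by is not None


# Example: a recommendation with no sign-off is logged but not cleared.
pending = DecisionRecord(
    action_id="A-0042",
    system_recommendation="engage",
    rationale="pattern match above threshold; awaiting human review",
)
assert authorize(pending, "decision_log.jsonl") is False
```

The design choice to log every record, cleared or not, is what preserves the trail for after-action review and accountability.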
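Similarly, a minimal sketch of the audit package the "contract for auditability" item could require a vendor to deliver and keep current. The structure and field names are assumptions for illustration; the binding list of deliverables belongs in the contract itself.

```python
# Minimal sketch of vendor-supplied audit metadata a contract might require.
# Field names are illustrative assumptions, not an established standard.
from dataclasses import dataclass
from typing import List


@dataclass
class AuditPackage:
    model_card: str               # intended use, known limits, operating envelope
    data_lineage: List[str]       # sources and processing steps for training data
    update_history: List[str]     # dated record of model and software changes
    red_team_results: List[str]   # adversarial and misuse testing findings
    incident_reports: List[str]   # post-deployment harm and near-miss reports

    def is_complete(self) -> bool:
        """A delivery with no model card, lineage, or red-team results should not clear review."""
        return bool(self.model_card and self.data_lineage and self.red_team_results)


# Example: a package missing red-team results fails the review gate.
incomplete = AuditPackage(
    model_card="v1 card",
    data_lineage=["source inventory v1"],
    update_history=[],
    red_team_results=[],
    incident_reports=[],
)
assert incomplete.is_complete() is False
```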
The bottom line
AI doesn't absolve us. It reveals where our legal theories thin out. Responsibility must stretch from the person who green-lights the system to the last effect it predictably causes. Keep a human in the loop, keep records, and keep the ability to say "stop."
About Frédéric Mégret
Frédéric Mégret is Full Professor and holder of the Hans & Tamar Oppenheimer Chair in Public International Law at McGill University. His work spans international criminal justice, human rights law, IHL, and the relationship between law and violence. He received an honorary doctorate from the University of Copenhagen in 2022 and was the James S. Carpentier Visiting Professor at Columbia Law School in 2024-25.
Further learning for legal teams
If your department is building AI literacy for policy, compliance, or procurement, see this curated set of programs by job function:
AI courses by job function