AI-Enabled Drones, State Responsibility, and Upholding the Rule of Law in Modern Warfare

AI-enabled drones challenge international law and ethics, risking sovereignty violations and civilian harm. Urgent reforms are needed to ensure accountability and uphold the rule of law.

Published on: Jun 03, 2025

AI-Enabled Drones, State Responsibility, and the Rule of Law: Legal and Ethical Imperatives

The rise of unmanned aerial vehicles (UAVs), especially those powered by artificial intelligence (AI), is reshaping armed conflict. Originally used for surveillance, these drones now play a central role in cross-border military strikes and targeted killings. By 2023, at least 19 countries had conducted drone strikes, with many others acquiring such technology. This trend presents serious legal and ethical challenges for states, testing international law, human rights, and the principles of the rule of law.

Legal and Ethical Challenges AI-Enabled Drones Pose for States

AI-enabled drones raise significant issues under the UN Charter, international humanitarian law (IHL), and international human rights law (IHRL). A primary legal concern is the violation of state sovereignty. Many drone strikes, such as those carried out by the US in Pakistan, Yemen, and Somalia, occur without the host state's consent, potentially breaching Article 2(4) of the UN Charter, which prohibits the use of force against another state's territorial integrity.

States often justify such strikes by invoking Article 51, the right to self-defense. However, these claims frequently lack clear evidence of an imminent threat or necessity. The absence of transparent proportionality assessments and insufficient disclosure of targeting decisions undermine core IHL principles like distinction, proportionality, and precaution.

Ethically, autonomous weapon systems (AWSs) are particularly problematic. Operating without full human oversight, these systems cannot reliably distinguish civilians from combatants, especially in irregular conflicts. This raises the risk of unlawful harm and IHL violations. Moreover, AWSs lack intent or moral reasoning, complicating accountability for war crimes, which under IHL requires knowledge or intent. This creates a legal void where responsibility becomes unclear or diffused.

EU member states face distinct legal and ethical responsibilities. While the EU AI Act excludes military uses from its scope, it stresses transparency, human oversight, and risk management—principles that align with the weapons-review obligation in Article 36 of Additional Protocol I to the Geneva Conventions. However, the absence of binding defense standards risks dual-use proliferation and weakens the EU's leadership in digital ethics.

The normalization of UAV warfare also lowers the threshold for hostilities by reducing the physical and political costs of deploying drones. This "riskless warfare" weakens deterrence and erodes the ethical duty to avoid unnecessary conflict. Such practices attract criticism and damage states' multilateral reputations, especially when civilian casualties go unacknowledged.

Cybersecurity vulnerabilities add another layer of concern. AI-enabled drones can be targeted by integrity or availability attacks that corrupt targeting data, leading to unlawful strikes. These risks increase if drones connect to nuclear command systems. Failure to secure AI systems is both a legal and moral failure, especially when civilian lives are at stake.

Military automation's socioeconomic effects also require attention. As drones become more autonomous, the need for human operators decreases, leading to job losses in defense sectors. Without retraining programs, this trend may deepen social inequality and alienate affected communities, creating longer-term political and ethical challenges.

Finally, the gap between states' ethical commitments and their military AI deployment practices erodes credibility. Many claim to uphold human rights and ethical governance, yet the disconnect between policy and practice weakens public trust.

Relationship to the Rule of Law

The challenges posed by AI-enabled drones highlight a deeper crisis in the rule of law. This principle demands transparency, accountability, and equal application of law—even by powerful states. Yet, drone programs often operate in secrecy, with unilateral legal interpretations and uneven enforcement of IHL and IHRL norms. This undermines legal constraints and moral responsibility.

When AI systems are used to bypass legal review or diffuse human accountability, foundational doctrines like individual criminal responsibility and state responsibility lose effectiveness. This erosion threatens the integrity of the international legal order.

Pathways for Improvement: Legal and Institutional Reform

Addressing these issues requires comprehensive reform at international and regional levels. States invoking self-defense for drone strikes should submit detailed reports under Article 51, including legal justifications, evidence of imminent threats, proportionality assessments, and targeting data. These reports must be publicly accessible.

The UN Secretariat could establish a platform similar to the Treaty Series to publish such submissions, enhancing transparency and accountability. A UN Special Rapporteur or Panel on AI and Targeted Killings should monitor compliance with IHL and IHRL, reporting regularly to the General Assembly and Security Council.

The UN Security Council should reform its procedures to circulate Article 51 submissions automatically and require legal reviews through the Office of Legal Affairs to prevent vague or unsubstantiated claims. Fact-finding bodies and Commissions of Inquiry must be empowered to investigate AI-enabled drone strikes, with their findings carrying legal weight in Security Council deliberations.

A new binding protocol under the Convention on Certain Conventional Weapons (CCW) should clearly define autonomous UAVs, restrict targeted killings, mandate human oversight, and require post-strike investigations.

At the EU level, legislation should ban fully autonomous lethal drones and enforce necessity and proportionality standards, with compliance audits by the European Defence Agency. A European Military AI Ethics Council composed of legal, technical, and civil society experts should review UAV operations under the Common Security and Defence Policy (CSDP).

Violations should lead to funding suspensions and arms export bans. Transparency can be enhanced by publishing an annual White Paper on Military AI and UAV use. The European Parliament’s oversight role should expand, and civil society watchdogs must receive public funding to ensure independent legal scrutiny.

Conclusion

AI-enabled drones have outpaced the legal and ethical frameworks meant to regulate them. Gaps in accountability, transparency, and moral responsibility weaken the rule of law and challenge state legitimacy. Immediate reforms are necessary to reassert legal and ethical boundaries around drone warfare.

This requires institutional innovation, sustained public oversight, and a firm commitment to human dignity, lawful conduct, and global accountability.

For legal professionals interested in the intersection of AI and international law, staying informed on these developments is critical. Further resources on AI policy and ethical standards can be found at Complete AI Training.

