Catholic ethicists back Anthropic in Pentagon fight over autonomous weapons, mass surveillance

Catholic ethicists back Anthropic's suit, saying mass surveillance and autonomous targeting cross moral lines. They insist on real human control.

Categorized in: AI News, Government
Published on: Mar 15, 2026

Catholic ethicists back Anthropic in court fight over Pentagon AI uses

Fourteen Catholic moral theologians and ethicists filed an amici curiae brief on March 13 supporting Anthropic in its lawsuit against the U.S. Department of War (Department of Defense). The filing argues that Anthropic acted as a responsible corporate citizen by maintaining guardrails against the use of its AI for mass surveillance and fully autonomous targeting.

Anthropic sued the Pentagon on March 9, after a Feb. 27 directive from President Donald Trump instructed agencies to stop working with the company over disagreements about acceptable AI use. The scholars argue the dispute is not a supply-chain risk but a disagreement over ethical limits that should matter to government decision-makers.

What the brief argues

The brief, grounded in Catholic social teaching and AI's technical realities, was authored substantively by Charles Camosy, Joseph Vukov, Brian J.A. Boyd, and Brian Patrick Green. Their position: mass surveillance of Americans and AI systems that can "select and engage targets without meaningful human oversight" cross core moral lines.

Mass surveillance: privacy and subsidiarity

The scholars align with Anthropic's objection to mass surveillance based on the dignity of the person and the Church's teaching on privacy. They cite the Catechism's principle that one is not bound to reveal the truth to someone without the right to know, and Pope Francis' 2023 call for an international treaty to regulate AI and curb a surveillance society.

They also point to subsidiarity, drawing on Pope Pius XI's Quadragesimo Anno: moving monitoring power to a distant center weakens human agency, invites AI-driven bureaucracy detached from local context, and can be a step toward totalitarianism. Local judgment, context, and accountability suffer when a centralized system sits over everyday life.

Autonomous weapons: human judgment is non-negotiable

On lethal autonomy, the brief argues that AI-directed weapons cannot meet the just war conditions of proportionality and noncombatant immunity without the particular human judgments those conditions require. Prudence is not pattern matching, and removing meaningful human oversight removes the very thing that makes lethal force morally assessable.

Beyond Catholic thought, they warn that autonomy blurs agency, accelerates kill chains beyond realistic human control, and shifts responsibility onto machines. Accountability must remain traceable to human decision-makers, especially in matters of life and death.

The scholars go further than Anthropic's technical rationale. While Anthropic's CEO Dario Amodei said current frontier systems are not reliable enough for fully autonomous weapons and the company will not knowingly endanger warfighters or civilians, the brief rejects lethal autonomous weapons even if someday "perfectly reliable."

Why this matters for government professionals

For procurement, policy, and oversight teams, this case puts concrete boundaries on the table: privacy, subsidiarity, and meaningful human control over any use of force. Expect heightened scrutiny of AI contracts that touch surveillance, targeting, or high-velocity decision support.

Operational considerations to reduce risk

  • Contracting: Spell out prohibited uses (mass surveillance of U.S. persons; autonomous target selection/engagement) and define "meaningful human control."
  • Oversight: Require audit logs, model cards, and decision-traceability to named human authorities for all sensitive deployments.
  • Privacy: Conduct Privacy Impact Assessments and data minimization reviews; restrict secondary use of communications data without due process.
  • Governance: Stand up cross-functional review for AI use-cases that affect civil liberties or rules of engagement; include local stakeholders where impacts are felt.
  • Testing: Mandate independent red-teaming for misidentification, bias, and escalation risks; simulate edge cases before fielding.
  • Operations: Establish pause/abort protocols that default to human authority during uncertainty, degraded comms, or model inconsistency.
  • Accountability: Clarify decision rights and liability in policy and ROE; avoid designs that hide behind "the system decided."
  • Supply chain: Vet vendors for explicit guardrails on surveillance and autonomy; align incentives to safety and compliance benchmarks.
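To make the oversight and accountability items above concrete, here is a minimal sketch of a review gate that routes every sensitive AI use-case through a named human authority and keeps an append-only audit trail. All names (`ReviewGate`, `PROHIBITED_USES`, the use-case labels) are hypothetical illustrations, not part of any cited policy or filing:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical prohibited-use list, mirroring the contracting bullet above.
PROHIBITED_USES = {"mass_surveillance_us_persons", "autonomous_target_engagement"}

@dataclass
class Decision:
    use_case: str
    approved: bool
    authority: str   # named human decision-maker, never "the system"
    rationale: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ReviewGate:
    """Cross-functional review sketch: every decision is logged and
    traceable to a named human authority."""

    def __init__(self) -> None:
        self.audit_log: list[Decision] = []

    def review(self, use_case: str, authority: str, rationale: str) -> Decision:
        # Prohibited uses are denied automatically, but the denial is
        # still logged and attributed to the reviewing human authority.
        approved = use_case not in PROHIBITED_USES
        decision = Decision(use_case, approved, authority, rationale)
        self.audit_log.append(decision)
        return decision

gate = ReviewGate()
d1 = gate.review("autonomous_target_engagement", "J. Doe, Review Board",
                 "Barred by contract clause on autonomous engagement")
d2 = gate.review("logistics_forecasting", "J. Doe, Review Board",
                 "No civil-liberties or use-of-force impact identified")
```

The design choice worth noting is that even automatic denials carry a human authority and rationale, so accountability never "hides behind the system," as the list above cautions.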

Context and further reading

Primary sources cited in the brief include Pope Pius XI's encyclical Quadragesimo Anno and Pope Francis' message urging an international AI treaty.

For practical guidance on public-sector AI governance and procurement guardrails, see AI for Government.
