Anthropic vs the Pentagon: Who sets the rules for AI in war?

Anthropic's clash with the Pentagon lays bare a fight over AI rules in war. With DPA threats and 'any lawful use' demands, contracts may set how far autonomy and surveillance go.

Published on: Feb 27, 2026

Anthropic v. the Pentagon: What this public feud says about AI in warfare

The public standoff between the US Department of Defense (DoD) and its AI supplier Anthropic is unusual. These two are typically aligned, especially on national security. Tensions rose after media reports alleged Anthropic's technology was used in the January 2026 abduction of former Venezuelan president Nicolás Maduro by US forces - claims Anthropic denies. Now the DoD is pressing hard: relax your ethical limits or face extraordinary measures.

US defense secretary Pete Hegseth has threatened to invoke the Defense Production Act (DPA), compelling access to Anthropic's models, while also hinting the firm could be labeled a supply chain risk. It's a dual-threat move that signals urgency and a willingness to create pressure through ambiguity.

The core dispute: Anthropic's red lines vs "any lawful use"

Anthropic's acceptable-use policy for Claude bans two things outright: mass surveillance of US citizens and fully autonomous weapons that select and engage targets without human control. The company frames these as baseline conditions for responsible AI. The DoD argues those limits are too tight for a messy security environment.

A January 9, 2026 memo from Hegseth instructed that "any lawful use" language appear in future DoD AI contracts within 180 days. It also warned against "ideological tuning" that could skew model responses. The subtext is clear: the Pentagon wants maximum operational discretion.

Why this is happening now

Claude already handles writing, coding, reasoning, and analysis across government interfaces, and Anthropic currently has an edge at higher clearances. But competitors see an opening. Palantir has expanded defense work, adding more AI options. Google updated its guidelines to allow AI work on weapons and surveillance, OpenAI revised its mission language, and xAI agreed to the DoD's "any lawful use" standard.

Put bluntly: values are becoming a differentiator. The winner will be the vendor that meets mission needs while staying inside political and legal guardrails - or successfully persuades government to move the guardrails.

Anthropic at a testing point

On February 24, Anthropic updated its responsible scaling policy, dropping a prior pledge to hold releases until risk mitigations were guaranteed. Leadership suggested that unilateral commitments make less sense as competitors race ahead. That's a signal: ethics language remains, but hard constraints are getting softened under market pressure.

This is the test. Principles are easy in press releases. They are expensive in procurement fights, high-stakes operations, and quarterly earnings.

The legal lever: the Defense Production Act

The DPA lets the government prioritize, allocate, and in some cases compel access to private-sector capabilities for national defense. It's a powerful tool - and contentious when applied to dual-use AI. Compelling supply while labeling a firm a risk sounds contradictory, but it increases leverage in negotiations and reshapes vendor incentives.

Defense Production Act (CRS overview)

What this means for government, science, and research professionals

If you work in government, procurement, national labs, or policy research, the signal is clear: ethical guardrails for military AI will be contested in contracts, not just in conferences. Expect sharper clauses, faster oversight cycles, and vendor switching.

  • Contracts: Define "human-in-the-loop" in measurable terms. Specify decision points, escalation paths, and override authority.
  • Testing and evaluation: Require pre-deployment red-teaming for targeting, escalation, misinformation, and model manipulation risks.
  • Data governance: Lock down training data lineage, audit logs, and incident reporting within classification limits.
  • Operational use: Establish playbooks for AI-assisted ISR, targeting support, and command-and-control - with clear abort criteria.
  • Supplier risk: Track viability, policy drift, and model changes that affect performance or compliance.
  • Interoperability: Plan for split-sourcing and model swaps; no single point of failure.
  • Public accountability: Prepare summaries of safeguards for legislative oversight and allies without exposing sources and methods.
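A clause like "human-in-the-loop" only binds if it is measurable. As a purely illustrative sketch - every name, threshold, and role below is hypothetical, not drawn from any actual contract - such terms could be encoded as machine-checkable policy:

```python
from dataclasses import dataclass


@dataclass
class HumanControlPolicy:
    """Hypothetical encoding of 'human-in-the-loop' contract terms.

    All field names, roles, and thresholds are invented for illustration.
    """
    decision_points: list[str]    # stages that require explicit human sign-off
    max_autonomy_seconds: float   # longest the system may act without review
    override_roles: list[str]     # roles authorized to halt or redirect

    def is_compliant(self, elapsed_without_review: float, signoff_role: str) -> bool:
        """Check one operational event against the policy terms."""
        within_autonomy_window = elapsed_without_review <= self.max_autonomy_seconds
        authorized_signoff = signoff_role in self.override_roles
        return within_autonomy_window and authorized_signoff


policy = HumanControlPolicy(
    decision_points=["target-nomination", "engagement-authorization"],
    max_autonomy_seconds=30.0,
    override_roles=["mission-commander", "watch-officer"],
)

print(policy.is_compliant(12.0, "watch-officer"))  # True: within limits
print(policy.is_compliant(45.0, "watch-officer"))  # False: autonomy window exceeded
```

The point of the sketch is that "measurable" means numbers and named authorities an auditor can check against logs, rather than adjectives in a clause.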

For officials building capability responsibly, see AI for Government. For those shaping policy, the AI Learning Path for Policy Makers covers governance, risk, and regulation essentials.

The gray areas no one wants to own

Anthropic bans fully autonomous weapons, but not AI that accelerates kill chains with humans nominally "in the loop." The difference on paper can disappear under time pressure. Similarly, banning surveillance of US citizens leaves broad scope for foreign mass surveillance - legally cleaner, ethically fraught.

These are not hypotheticals. They are design decisions: latency budgets, alert thresholds, confidence scores, and who signs off when comms are degraded.
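Those design decisions reduce to a few lines of gating logic. A minimal sketch, with illustrative thresholds that do not reflect any vendor's actual policy, of confidence-gated alerting with a degraded-comms fallback:

```python
def requires_human_signoff(confidence: float, comms_ok: bool,
                           auto_threshold: float = 0.99) -> bool:
    """Decide whether an AI-generated alert may proceed without sign-off.

    The rules and the 0.99 threshold are hypothetical, for illustration only.
    """
    if not comms_ok:
        # Degraded comms: never act autonomously; queue for human review.
        return True
    # Even with comms intact, only very high-confidence alerts skip review.
    return confidence < auto_threshold


print(requires_human_signoff(0.95, comms_ok=True))    # True: below threshold
print(requires_human_signoff(0.999, comms_ok=True))   # False: may proceed
print(requires_human_signoff(0.999, comms_ok=False))  # True: comms degraded
```

Whoever picks `auto_threshold` and the comms fallback is, in effect, writing the rules of engagement.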

Key questions leaders should ask now

  • Where does "assistance" end and "autonomy" begin in our mission profiles? How do we measure it?
  • What is our minimum viable definition of human control under electronic attack and time-critical targeting?
  • Which missions need model diversity to reduce correlated failure?
  • What incidents trigger automatic suspension of model access or a forced human review?
  • How do we prove compliance to auditors without exposing sensitive tactics?
  • If DPA authority is exercised, what governance countermeasures protect against misuse?

International context

States met again in early February to discuss "responsible AI" in military use, without a unified US position. On March 2-6, the UN will convene discussions on limiting lethal autonomous weapons. Whatever emerges will influence contract terms, export controls, and alliance interoperability - even if it's soft law.

UN discussions on lethal autonomous weapons (LAWS)

What to watch next

  • Whether the DoD inserts "any lawful use" language across new AI contracts within 180 days.
  • Vendor shifts: more firms aligning to Pentagon standards, or staking out distinct ethical limits as a market play.
  • Procurement hedging: agencies preparing for DPA scenarios and supplier churn.
  • Congressional oversight: clarity on limits for AI-enabled targeting, surveillance, and information ops.
  • Allied stances: NATO partners and key Indo-Pacific allies setting their own guardrails that affect joint operations.

Bottom line

This fight isn't "ethics versus security." It's about who controls the rules of engagement for AI, and how those rules show up in code, contracts, and command authority. Anthropic appears to be pushing back, while also adjusting to competitive pressure.

If you're responsible for government tech decisions, build for accountability now: crisp contract language, measurable human control, auditable logs, and tested fail-safes. Hope is not a control. Policy on a slide is not a safeguard. The standard you enforce in procurement becomes the standard on the battlefield.
