US Used Anthropic's Claude in Caracas Operation to Capture Maduro, WSJ Reports

WSJ: Pentagon used Anthropic's Claude in a Caracas raid that ended with Nicolás Maduro in U.S. custody on drug charges. Its role is classified, sparking policy fights.

Published on: Feb 15, 2026

Report: Pentagon Used Anthropic's Claude During Operation to Capture Venezuela's Maduro

According to a Wall Street Journal report, U.S. defense forces deployed Anthropic's AI model Claude during a January raid in Caracas that led to the capture of then-Venezuelan President Nicolás Maduro, who was transferred to New York to face drug-trafficking charges. The report, citing people familiar with the matter, says the model was made available on classified systems via Anthropic's partnership with Palantir.

The specific role Claude played remains classified. Sources speculated it could have supported intelligence synthesis, satellite imagery interpretation, or real-time recommendations during the raid.

Policy Tension: Safety Rules vs. Mission Requirements

Anthropic's usage policies prohibit supporting violence, designing weapons, or conducting surveillance. Despite this, the tool was reportedly used in the operation. An Anthropic spokesperson said, "Any use of Claude, whether in the private sector or across government, is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance."

The report also noted internal Pentagon debates about canceling a $200 million contract signed with Anthropic last summer. Defense Secretary Pete Hegseth reportedly said in January the Pentagon would not work with AI models that "won't allow you to fight wars," referencing discussions with Anthropic.

Anthropic CEO Dario Amodei has previously urged stronger AI regulation and warned against autonomous lethal use and broad domestic surveillance. The Pentagon has been pushing leading AI firms to deploy models on classified networks with fewer user restrictions; most vendors still operate only on unclassified administrative networks. As reported, Claude is currently the only major model accessible in classified settings through third-party integrations, though the government remains bound by Anthropic's guidelines.

Why this matters for operations leaders

  • Mission vs. model policy: Vendor safety rules can constrain operational options at critical moments. You need clear alignment on permissible use before go-time.
  • Classified deployment readiness: Getting AI into secure environments requires integration partners, authority to operate (ATO), data pathways, and tested fallbacks.
  • Human-in-the-loop by default: In high-stakes actions, AI should propose, humans decide. Define decision rights, verification steps, and pause conditions.
  • Auditability: You'll need logs that survive classification protocols, covering who prompted what, which data sources were used, and how outputs informed actions.
  • Vendor risk and contingency: If a provider restricts a use mid-operation, what's your plan B? Build dual-vendor or offline inference options where feasible (a minimal routing sketch follows this list).
  • Policy drift monitoring: Product and policy changes happen without notice. Assign owners to track updates, test impact, and revalidate approvals.
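
The auditability and vendor-contingency items above are easier to plan for with a concrete shape in mind. The Python sketch below is a minimal, hypothetical illustration, not Anthropic's API or any official tooling: it routes a prompt through an ordered list of approved providers and writes an append-only audit record (timestamp, prompt, outcome, content digest) for every attempt.

```python
# Minimal sketch of a fallback router with an append-only audit trail.
# All names here are illustrative; real provider callables would wrap the
# SDKs or gateway endpoints your authority to operate (ATO) covers.
import hashlib
import json
from datetime import datetime, timezone
from typing import Callable, Sequence

AUDIT_LOG = "audit.jsonl"  # in practice: write-once storage that meets classification rules


def audit(record: dict) -> None:
    """Append one audit record per model interaction, with a content digest."""
    record["ts"] = datetime.now(timezone.utc).isoformat()
    record["digest"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


def ask_with_fallback(prompt: str, providers: Sequence[tuple[str, Callable[[str], str]]]) -> str:
    """Try each approved provider in order, logging every attempt and outcome."""
    for name, call in providers:
        try:
            answer = call(prompt)
            audit({"provider": name, "prompt": prompt, "outcome": "ok", "response": answer})
            return answer
        except Exception as exc:  # refusal, policy block, outage, etc.
            audit({"provider": name, "prompt": prompt, "outcome": f"failed: {exc}"})
    raise RuntimeError("No approved provider available; fall back to the manual procedure.")


# Example wiring with stand-in callables:
primary = lambda p: f"[primary model answer to: {p}]"
backup = lambda p: f"[backup model answer to: {p}]"
print(ask_with_fallback("Summarize the cleared reports.", [("primary", primary), ("backup", backup)]))
```

In a real deployment the stand-in callables would wrap whatever clients your ATO actually covers, and the log would live in write-once storage that satisfies legal and classification requirements.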

Signals to watch

  • Contract shifts: Renewals, pauses, or conditional ATOs tied to AI safety clauses.
  • Model features for secure ops: Offline modes, stricter logging, or classification-aware controls.
  • Public guidance: DoD policy updates, oversight hearings, or new AI ethics directives.
  • Provider positions: Any change in Anthropic's usage policies or enforcement practices (a simple change-detection sketch follows this list).
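
Tracking provider positions can be partly automated. The sketch below, with an assumed policy URL and local state file, simply hashes the published usage-policy page and reports when the hash changes so a human can review the diff and revalidate approvals.

```python
# Hypothetical drift check: hash a vendor's published usage-policy page and flag changes.
# The URL and state file are assumptions; point them at the documents your approvals cite.
import hashlib
import pathlib
import urllib.request

POLICY_URL = "https://www.anthropic.com/legal/aup"  # assumed location of the tracked policy
STATE_FILE = pathlib.Path("policy_hash.txt")


def policy_changed() -> bool:
    """Return True if the tracked policy text differs from the last recorded hash."""
    with urllib.request.urlopen(POLICY_URL, timeout=30) as resp:
        digest = hashlib.sha256(resp.read()).hexdigest()
    previous = STATE_FILE.read_text().strip() if STATE_FILE.exists() else None
    STATE_FILE.write_text(digest)
    return previous is not None and previous != digest


if __name__ == "__main__":
    if policy_changed():
        print("Policy page changed: trigger review and revalidate approvals.")
```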

Practical next steps for your team

  • Codify allowed use: Map each operational use case to model policy language. Document red lines and required approvals.
  • Build guardrails: Standard prompts, retrieval boundaries, and pre-approved data sources (see the pre-check example after this list). Test for hallucinations and sensitive inferences.
  • Exercise under pressure: Run time-boxed simulations with red teams. Validate that humans can override or downshift to simpler tooling.
  • Harden observability: Set up prompt/output capture, source-of-truth tagging, and immutable logs that meet legal and classification needs.
  • Negotiate the contract: Include mission-critical carve-outs, kill-switch procedures, and a tested continuity plan if a model becomes unavailable.
  • Train operators: Ensure your staff can use Claude safely and effectively, with scenario playbooks and policy refreshers.
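
As a concrete starting point for the "codify allowed use" and "build guardrails" items, the sketch below shows one way to gate model calls behind an approvals registry. The use cases, source names, and controls are hypothetical placeholders for whatever your own documentation defines.

```python
# Illustrative pre-check: refuse a model call unless the use case has a documented
# approval and every cited data source is on the pre-approved list. The registry
# contents and names are hypothetical placeholders for your own approvals record.
from dataclasses import dataclass

APPROVED_USE_CASES = {
    "intel_summary": {"requires_human_review": True},
    "logistics_planning": {"requires_human_review": False},
}
APPROVED_SOURCES = {"internal_wiki", "cleared_reports"}


@dataclass
class Request:
    use_case: str
    sources: set[str]
    prompt: str


def precheck(req: Request) -> dict:
    """Return the controls for an approved request, or raise if it falls outside approvals."""
    if req.use_case not in APPROVED_USE_CASES:
        raise PermissionError(f"Use case '{req.use_case}' has no documented approval.")
    unapproved = req.sources - APPROVED_SOURCES
    if unapproved:
        raise PermissionError(f"Unapproved data sources: {sorted(unapproved)}")
    return APPROVED_USE_CASES[req.use_case]


controls = precheck(Request("intel_summary", {"cleared_reports"}, "Summarize the latest reports."))
print(controls)  # {'requires_human_review': True} -> route output through a human before acting
```

A check like this sits in front of the model call, so a request outside the documented approvals fails loudly before any prompt is sent.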

Upskill your operations team

If you're integrating Claude into sensitive workflows, focused training shortens the learning curve and reduces risk.

