AI in Kinetic Operations: What Centcom's Use of Maven and Claude Signals for Ops Leaders
US Central Command says AI now helps screen and organize data for ongoing operations against Iran, enabling analysts to spend more time on verification and higher-level judgment. Reports cite more than 2,000 targets struck, including 1,000 within the first 24 hours. Sources told Bloomberg that Palantir's Maven Smart System sits at the core, ingesting 150+ data sources, with Anthropic's Claude reportedly integrated.
Capt. Timothy Hawkins emphasized that AI generates points of interest and structures information, but humans make decisions through a formal legal process. Advocacy groups warn about automation bias closing the gap between a recommendation and an action. Centcom is investigating reports that a strike on a girls' primary school killed more than 160 people; responsibility is unclear and it's unknown if AI was involved. Palantir and Anthropic declined or did not respond to requests for comment.
Why this matters for operations
This is a live case study of AI as a decision-support layer in high-stakes environments. The pattern: a hardened data/ops platform (Maven) plus swappable models (e.g., Claude) feeding analysts faster, cleaner context while preserving human authority.
- Speed with control: AI narrows the search space; humans validate and approve.
- Modular stack: Keep models replaceable behind a common interface to avoid lock-in and maintain bargaining power.
- Provenance first: 150+ data sources means strict normalization, lineage, and access controls are non-negotiable.
- Traceability: Log prompts, model versions, features, and approvals to enable after-action reviews.
- Bias discipline: Counter automation bias with structured dissent, adversarial testing, and confidence thresholds.
- Incident response: Predefine pause/rollback criteria and investigative procedures when harm or error is suspected.
The Anthropic-Pentagon dispute: what ops teams should learn
Friction grew when Anthropic held firm on two carve-outs: mass domestic surveillance of Americans and fully autonomous weapons. The Pentagon pushed for "all lawful purposes." When Anthropic didn't budge, the administration labeled the company a "supply chain risk," and a reported $200M contract was canceled. OpenAI later announced an agreement allowing its models in classified systems, reportedly with "any lawful purpose" language, while citing similar guardrails of its own.
- Contract clarity: Spell out permissible uses and prohibited scenarios at a granular level. Words matter.
- Policy vs. tech: Align legal terms with enforceable controls (access policies, model gating, safety filters).
- Risk posture: Government deals can swing fast. Factor political, legal, and reputational exposure into ROI.
- Multi-vendor resilience: Platform-first design makes provider swaps feasible without breaking operations.
How to structure AI-backed decision support in your operation
- Map decisions: Separate "propose" (AI) from "approve/act" (human). Make approval thresholds explicit.
- Evidence standards: Define what counts as sufficient signal, and require model rationale or feature attributions where possible.
- Confidence and escalation: Route low-confidence cases to senior review; auto-suppress weak signals.
- Red-team and rehearse: Attack your own system for failure modes (bias, spoofing, drift). Run tabletop drills.
- Audit trail by default: Immutable logs for data inputs, prompts, outputs, and human decisions.
- Incident playbook: Criteria to halt, investigate, and communicate; assign owners and timelines.
- Metrics that matter: Time-to-verify, false positives/negatives, analyst throughput, model drift, near-misses.
- Upskill your team: Train analysts in AI skepticism and structured verification, not blind acceptance.
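The confidence-and-escalation step above can be made concrete with a small routing sketch. The thresholds and queue names here are assumptions for illustration, not doctrine: low-confidence cases go to senior review, weak signals are auto-suppressed but still logged, and nothing is auto-approved.

```python
# Illustrative thresholds -- tune to your own false-positive tolerance.
SUPPRESS_BELOW = 0.30   # auto-suppress weak signals (still logged)
ESCALATE_BELOW = 0.70   # below this, route to senior review


def route(signal: dict) -> str:
    """Map a model confidence score to a human review path."""
    conf = signal["confidence"]
    if conf < SUPPRESS_BELOW:
        return "suppressed"      # never shown as actionable
    if conf < ESCALATE_BELOW:
        return "senior_review"   # human-in-the-loop escalation
    return "analyst_queue"       # high confidence, still needs approval


# Every signal gets an audit entry, including the suppressed ones.
audit = []
for s in [{"id": 1, "confidence": 0.95},
          {"id": 2, "confidence": 0.55},
          {"id": 3, "confidence": 0.10}]:
    audit.append({**s, "route": route(s)})
```

Logging suppressed signals matters for after-action review: a near-miss you silently dropped is a metric you can never recover.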
Ethics and oversight are operational controls
The Stop Killer Robots coalition warns that decision-support systems can nudge operators toward over-trust. Treat that as a design bug to fix, not a PR issue. Bake counter-bias steps into workflows: mandatory second checks for high-impact actions, independent review channels, and clear dissent paths.
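A mandatory second check can be enforced in the workflow itself rather than left to policy. This is a minimal sketch under assumed names (`cleared`, `dissent`, reviewer IDs are all illustrative): a high-impact action needs two distinct reviewers, and any unresolved dissent blocks it outright.

```python
def cleared(action: dict, required: int = 2) -> bool:
    """High-impact actions need `required` distinct human approvals."""
    reviewers = set(action.get("approvals", []))  # dedupe repeat clicks
    if action.get("dissent"):
        return False                 # any open dissent blocks the action
    return len(reviewers) >= required


strike_review = {"approvals": ["analyst_7", "analyst_7"], "dissent": []}
assert not cleared(strike_review)    # same person twice doesn't count

strike_review["approvals"].append("senior_3")  # now two distinct reviewers
```

Deduplicating by reviewer identity is the design point: a second check only counters automation bias if it comes from a second human.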
Centcom's ongoing investigation into reported civilian casualties is a sobering reminder: your governance is only as good as your worst day. Build systems that make it easy to stop, inspect, and correct, without delay or ambiguity.
Bottom line for Ops
AI can compress discovery time and surface targets, risks, or anomalies faster than human-only teams. The trade is governance complexity: contracts, controls, and culture must keep the machine pointed at the right problems, and keep people firmly in charge of the final call.