Trump orders agencies to drop Anthropic as fight over military access to AI escalates

Trump ordered federal agencies to drop Anthropic within six months over security concerns. Agencies should freeze new use, audit tools and data, and line up replacements to avoid service disruptions.

Published on: Mar 01, 2026

Trump Orders Federal Agencies to Stop Using Anthropic: What Government Customer Support Teams Need to Do Now

US President Donald Trump said he is directing every federal agency to immediately stop using technology from Anthropic. The decision follows a public standoff between the company and the administration over access to AI tools.

Defense Secretary Pete Hegseth said he is designating Anthropic a "supply chain risk," which would block any company doing business with the military from any commercial activity with Anthropic. Anthropic said it will challenge any such designation in court and rejects uses tied to "mass surveillance" and "fully autonomous weapons."

The White House plans a six-month phase-out across government, according to the President. He also warned Anthropic to cooperate with the transition or face the "Full Power of the Presidency," including potential civil and criminal consequences. Anthropic said it has received no direct notice about the status of talks and reiterated that it would support customers through a transition if required.

What changed this week

  • The President ordered a government-wide halt on Anthropic tools, with a six-month wind-down.
  • The Defense Secretary said he will label Anthropic a "supply chain risk," blocking military contractors from any commercial activity with the company.
  • Hegseth threatened to invoke the Defense Production Act (DPA) if Anthropic refused broad access to its AI systems for "any lawful use."
  • Anthropic refused uses tied to domestic mass surveillance and fully autonomous offensive weapons, and said it would fight any risk label in court.
  • OpenAI's Sam Altman signaled similar "red lines" for product use, while confirming a separate deal to run OpenAI models on classified cloud networks for the department the President has referred to as the "Department of War."
  • Federal agencies have used Anthropic tools, including for classified work, since 2024 under a Pentagon contract valued at $200 million. A recent private valuation reportedly put the company near $380 billion.

What this means for agencies, contractors, and support desks

If your agency or your contractors use Anthropic tools (e.g., Claude) in production or pilot programs, expect a structured removal over six months. The biggest workload will fall on IT, procurement, security, and frontline support to keep services stable while switching providers.

  • Freeze new Anthropic deployments and extensions right now.
  • Inventory every Anthropic integration, workflow, and dataset. Flag anything tied to mission-critical services or citizen-facing channels.
  • Map dependencies: SSO, data pipelines, logging, prompts, plugins, and any downstream automations.
  • Start vendor comparisons and security reviews for replacements that meet agency requirements.
  • Coordinate with contracting officers on funding, options, and allowable interim steps.
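The inventory step above can be partially automated. Below is a minimal sketch of a repository scanner that flags files mentioning Anthropic SDKs, endpoints, or model names; the patterns and the demo files are illustrative assumptions, not an exhaustive detection list for any real agency stack.

```python
import os
import re
import tempfile

# Illustrative patterns that commonly indicate an Anthropic dependency.
# Extend these for your agency's languages, configs, and IaC templates.
PATTERNS = [
    re.compile(r"\banthropic\b", re.IGNORECASE),   # SDK imports, package names
    re.compile(r"api\.anthropic\.com"),            # direct API endpoints
    re.compile(r"claude-[\w.-]+"),                 # model identifiers
]

def scan_tree(root: str) -> dict:
    """Walk a directory and return {file: [matched lines]} for pattern hits."""
    hits = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    for line in fh:
                        if any(p.search(line) for p in PATTERNS):
                            hits.setdefault(path, []).append(line.strip())
            except OSError:
                continue  # unreadable file; record it separately in a real audit
    return hits

# Demo against a throwaway directory with one flagged file (hypothetical content).
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "app.py"), "w") as fh:
        fh.write("import anthropic\nmodel = 'claude-sonnet-4'\n")
    with open(os.path.join(tmp, "util.py"), "w") as fh:
        fh.write("print('no AI here')\n")
    findings = scan_tree(tmp)
    print(len(findings), "file(s) flagged")  # prints: 1 file(s) flagged
```

A scan like this only catches code-level references; pair it with SSO logs, network egress records, and procurement data to find SaaS tools that embed Anthropic models invisibly.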

Immediate actions (next 14 days)

  • Appoint an executive lead and a cross-functional "AI off-ramp" team (IT, SecOps, Legal, Privacy, Procurement, Comms, Support).
  • Issue a freeze memo and a change-control process for any AI tooling tied to Anthropic.
  • Stand up a dedicated support queue for "AI tool changes" so staff know where to send issues.
  • Publish a short internal FAQ and help-desk script for frontline agents (see "Messaging" below).
  • Request data export plans and timelines from Anthropic and any third-party platforms that embed Anthropic models.
  • Back up prompts, knowledge bases, fine-tunes, logs, and audit trails; verify chain-of-custody for sensitive data.
  • Identify and test 1-2 replacement providers for each high-impact use case; run parallel pilots to reduce cutover risk.
  • Kick off security and privacy reviews for alternatives; confirm data residency, retention, and model-training policies.
  • Update ATO boundaries and interconnection agreements if a new provider touches FISMA-scoped systems.
  • Document risk acceptance or compensating controls if full cutover won't complete inside six months.
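For the backup and chain-of-custody step above, a hash manifest is a simple, verifiable control: hash every exported file at export time, then re-hash later to prove nothing changed. This is a minimal sketch assuming a local export directory; the file names are hypothetical.

```python
import hashlib
import json
import os
import tempfile

def build_manifest(export_dir: str) -> dict:
    """Compute a SHA-256 digest for every file under export_dir.

    Re-running this later and comparing digests demonstrates the export
    was not altered in transit or storage (basic chain-of-custody).
    """
    manifest = {}
    for dirpath, _dirs, files in os.walk(export_dir):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                digest = hashlib.sha256(fh.read()).hexdigest()
            manifest[os.path.relpath(path, export_dir)] = digest
    return manifest

# Demo: hash a small hypothetical export and print the manifest.
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "prompts.jsonl"), "w") as fh:
        fh.write('{"prompt": "summarize ticket"}\n')
    print(json.dumps(build_manifest(tmp), indent=2))
```

Store the manifest separately from the export itself (and consider signing it) so the integrity record cannot be modified alongside the data it protects.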

Working with contractors and vendors

  • Request written attestations from integrators and SaaS vendors on whether they use Anthropic anywhere in your scope.
  • Insert flow-down clauses requiring contractors to cease Anthropic use for your work and to notify you of any embedded dependencies.
  • For dual-use vendors serving DoD, confirm their compliance plan given the "supply chain risk" statement.
  • Require a de-risking timeline, data migration plan, and staff training plan from each vendor.

Procurement and legal notes

A formal "supply chain risk" designation would have broad implications for defense contractors and any agency work that touches DoD systems. Track official notices and acquisition guidance closely. If the Defense Production Act is invoked, it can compel certain kinds of industrial cooperation and priority performance. Review the statute and prepare for scenario planning with counsel.

Messaging guidance for customer support teams

  • What changed: "Per a presidential directive, our agency is phasing out Anthropic AI tools over the next six months."
  • Continuity: "We expect services to remain available. If issues arise, we'll provide updates and alternatives."
  • Data handling: "Your data protections remain in place. We are exporting and securing information before switching providers."
  • Contractors: "Vendors working on our behalf are required to follow the same rules and timelines."
  • Escalation: "If you notice changes in AI-assisted features or response quality, please open a ticket under 'AI tool changes.'"

Key risks to manage

  • Service degradation during model switchovers or policy throttling.
  • Lost prompts, context windows, or embeddings that break workflows.
  • Data retention or deletion gaps during export and shutdown.
  • Shadow AI use by staff trying to fill gaps with unvetted tools.
  • Contract disputes over scope, SLAs, or termination clauses.

Mitigations that work

  • Run side-by-side tests of replacement models on your top 20 tasks; record success rates and latency.
  • Pre-approve interim tools with clear guardrails; block unapproved providers at the network level.
  • Create "golden prompts" and evaluation sets to keep quality steady across model swaps.
  • Schedule staged cutovers after business hours with rollback plans and on-call coverage.
  • Hold weekly stakeholder reviews until all Anthropic dependencies are retired.
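The side-by-side testing and "golden prompts" items above can share one small harness: run each candidate model over a fixed evaluation set and record pass rate and latency. This is a sketch under stated assumptions; the golden set, the expected-substring check, and the stand-in model are all hypothetical, and a real pilot would call each provider's API behind the same function signature.

```python
import time

def evaluate(model, cases):
    """Run a model callable over (prompt, expected-substring) pairs.

    Returns pass rate and mean latency so candidate providers can be
    compared on the same golden set. Substring matching is a crude
    illustrative check; real evals usually need task-specific scoring.
    """
    passes, latencies = 0, []
    for prompt, expected in cases:
        start = time.perf_counter()
        answer = model(prompt)
        latencies.append(time.perf_counter() - start)
        if expected.lower() in answer.lower():
            passes += 1
    return {
        "pass_rate": passes / len(cases),
        "mean_latency_s": sum(latencies) / len(latencies),
    }

# Hypothetical golden set and stand-in model for the demo; swap in real
# API calls behind the same signature when piloting replacements.
golden_set = [
    ("What form renews a passport?", "DS-82"),
    ("What does IRS stand for?", "Internal Revenue"),
]
def candidate_a(prompt):
    if "passport" in prompt:
        return "Use form DS-82."
    return "Internal Revenue Service"

results = evaluate(candidate_a, golden_set)
print(results["pass_rate"])  # prints: 1.0
```

Keeping the evaluation set frozen across all candidates is what makes the comparison meaningful; change the set and you reset the baseline for every provider.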

What to watch next

  • Whether the formal "supply chain risk" designation is issued and how it's scoped.
  • Any DPA action and its practical effects on AI vendors.
  • Court filings from Anthropic challenging the designation.
  • OMB, DHS, and DoD guidance for federal AI procurement and data handling during the phase-out.
  • Clarifications on which commercial entities are barred from Anthropic if they work with the military.

Bottom line

Treat this as a managed decommission. Freeze new use, inventory everything, secure your data, and pilot replacements now. With a clear plan and tight comms, you can hit the six-month window without disrupting services to the public.
