Deregulation vs. Defense: What Trump's AI Action Plan Means for Security and Trust

White House to revise AI risk frameworks, easing procurement rules but adding vendor uncertainty. Experts urge NIST-first controls, deepfake playbooks, and coordination with DHS.

Published on: Sep 22, 2025
Deregulation vs. Defense: What Trump's AI Action Plan Means for Security and Trust

What's Behind the New U.S. AI Action Plan-and What It Means for Leaders

The administration is pushing agencies to revise federal AI risk management frameworks with an eye toward faster innovation. That move could loosen requirements tied to government procurement and shift how vendors prove AI safety and security.

Deepfake and AI fraud expert Joshua McKenty, former Chief Cloud Architect at NASA and now CEO of Polyguard, sees mixed signals: speed on paper, execution risk in practice.

The core move: deregulation via framework revisions

The Department of Commerce has been directed to update AI risk frameworks. In practice, this may weaken safeguards that were expected for doing business with the federal government, creating new ambiguity for vendors and agencies.

For leaders, the immediate task is to map current controls to the existing NIST AI Risk Management Framework and prepare for changes. Don't wait for final text to start impact assessments.

Security push: a DHS-led AI ISAC

McKenty on the proposed DHS "AI Information Sharing and Analysis Center": "The US is dangerously behind in their response to emerging AI-powered cybersecurity attacks, as evidenced by the recent mishandling of deepfake attacks on Marco Rubio, Rick Crawford and others."

He adds a caution: "It's encouraging to see the White House finally take AI threats seriously - but urgency without coordination risks compounding the problem. The challenge ahead isn't just standing up new programs, it's making sure they actually work."

AI-specific cybersecurity: build on what already works

McKenty's guidance is direct: align with the existing playbooks from the FBI, CISA, NSA, and the DoD Cyber Crime Center. "What's needed is clever coordination and actionable intelligence."

  • Stand up formal channels with federal partners for threat intel and incident escalation.
  • Add AI threat scenarios to red teaming, from model abuse to content forgery and deepfakes.
  • Operationalize content provenance (signing, detection, takedown workflows) in comms, legal, and security.

Practical implications for executives and public-sector leads

  • Procurement and compliance: Track Commerce/NIST revisions closely. Keep current controls aligned to the NIST AI RMF even if new rules relax requirements.
  • Risk governance: Implement AI-specific threat modeling, model inventories, and evaluation gates before deployment.
  • Security operations: Prepare playbooks for deepfake incidents targeting executives, elections, or markets. Dry-run response with legal and PR.
  • Vendor strategy: Reassess supplier risk. Require attestations on model testing, data lineage, and misuse safeguards.
  • Measurement: Define a small set of leading indicators (abuse rates, detection time, model safety benchmarks) and review monthly.

Workforce gap: plan for skills at scale

"The U.S. faces a growing talent gap in AI. While demand for skilled professionals is accelerating, our pipeline of trained engineers, researchers, and cybersecurity experts isn't keeping pace," says McKenty. He calls for long-term STEM investment, immigration pathways for top talent, and tighter industry-academic collaboration.

  • Stand up a rolling upskilling plan for security, data, and engineering teams (quarterly refresh, clear skill paths).
  • Cross-train security analysts on model behavior, prompt-injection, data poisoning, and detection engineering for synthetic media.
  • Use targeted credentials to validate progress and hiring. Consider curated options for role-based learning: AI courses by job.

Risk management: keep the science, keep the guardrails

McKenty on the NIST approach: "NIST's framework is one of the few widely respected tools for managing AI risk. Revisions should focus on technical clarity, threat modelling, operational usability, and science - not politics. Stripping out key areas that address misinformation or emergent behaviour would make the framework less relevant just as the stakes are getting higher."

  • Continue using the NIST AI RMF as your baseline and document deviations.
  • Maintain controls for misinformation, content authenticity, and emergent behavior regardless of federal shifts.
  • Participate in public comment periods to push for clarity on testing, incident reporting, and supply chain requirements.

What to watch next

  • Drafts from Commerce/NIST on framework revisions and timelines.
  • DHS details on the AI ISAC: scope, membership, sharing protocols.
  • OMB memos tying AI controls to federal procurement.
  • Agency-specific implementations (Commerce, DHS, Defense) and how they treat misinformation and emergent behavior risks.
  • Updated agency guidance from CISA on secure AI system development: CISA AI resources.

The bottom line: expect faster policy shifts with uneven implementation. Keep your controls anchored to science and testing, not headlines. Move fast on coordination, measurement, and skills so your programs work under any rule set.