President Trump's New AI Executive Order: What Employers and Business Leaders Should Know
Published: December 19, 2025
On December 11, 2025, President Trump signed an Executive Order titled "Ensuring a National Policy Framework for Artificial Intelligence." It is the clearest federal move yet to push back on state-by-state AI rules, especially those touching business and employment practices.
Executives should treat this as direction, not a rule change: current obligations stay the same, and compliance risk has not decreased.
What the Executive Order Does
- Directs the Department of Justice to form an AI Litigation Task Force to challenge state AI laws seen as burdensome to innovation, conflicting with federal policy, or affecting interstate commerce.
- Instructs the Department of Commerce to identify and publicly evaluate "onerous" state AI laws, with emphasis on rules that pressure systems to alter what the administration deems truthful outputs or impose questionable disclosure mandates.
- Authorizes agencies to consider conditioning certain discretionary federal funds on a state's AI regulatory posture.
- Directs the FTC and FCC to explore federal approaches to AI disclosures and consumer protection standards.
- Calls for federal legislation to create a uniform national AI framework, while preserving state authority in areas like child safety, state procurement, and data center infrastructure.
Bottom line: a federal preference for fewer state-specific mandates and more consistent federal standards.
What the Executive Order Does Not Do
- It does not repeal, suspend, or invalidate existing state or local AI laws.
- It does not create a comprehensive federal AI compliance regime.
- It does not automatically preempt state regulation (that requires Congress or the courts).
- It does not shield employers from enforcement or private litigation related to AI use.
State and local AI requirements, including those tied to employment decisions, remain fully enforceable today.
What This Means for State AI Regulations
Expect more uncertainty before clarity. States that passed AI-specific laws plan to defend them, and more bills are coming.
Federal action will run through agency work and litigation, both of which take time and will face constitutional and federalism challenges. Even if some AI-specific laws are narrowed or delayed, states will keep using familiar tools: consumer protection, employment discrimination statutes, privacy and biometric rules, and unfair competition laws.
Why Best Practices Matter Now
Across agencies and courts, the themes are consistent: accountability, transparency, documentation, and meaningful human oversight. Proposed federal bills like the No Robot Bosses Act mirror ideas already in state laws and frameworks like the EU AI Act.
Global momentum reinforces this direction. For context on expected standards, see the EU AI Act; for consumer protection and disclosure thinking in the U.S., review the FTC's guidance on AI.
Waiting for one unified U.S. framework is no longer a strategy. The practices regulators expect are becoming clearer and more consistent; early movers will manage risk better and adjust faster.
Executive Action Plan: What to Do Now
- Map your AI footprint: Inventory where AI influences recruiting, hiring, promotions, performance, scheduling, and monitoring. Include vendor-supplied tools.
- Set oversight for high-impact uses: Require human review for employment-impacting AI. Document decision flows and create clear escalation paths to question or override outputs.
- Audit your claims: Align internal and external statements about fairness, accuracy, and bias mitigation with how the tools actually work and with what your evidence supports.
- Tighten vendor contracts: Bake in testing and validation, documentation delivery, audit rights, incident response, and allocation of responsibility for compliance and litigation exposure.
- Stand up cross-functional governance: Legal, HR, security, data science, and operations should meet on a set cadence with authority to approve high-risk use cases.
- Track developments without pause: Monitor state and federal moves, but don't wait. Ship governance improvements in parallel with regulatory change.
90-Day Focus
- Weeks 1-2: Complete AI inventory and classify use cases by impact and risk.
- Weeks 3-6: Implement human-in-the-loop controls and documentation for top-risk employment uses.
- Weeks 7-10: Update vendor agreements and RFP templates with AI-specific terms.
- Weeks 11-12: Publish internal AI policy, train managers, and stand up an intake process for new tools.
The Bottom Line
This Executive Order does not give businesses a compliance off-ramp. It marks a shift toward federal influence while state rules continue to apply.
AI governance is now a core employment law and enterprise risk issue. Leaders who act on inventory, oversight, documentation, and vendor diligence will be better positioned for scrutiny, litigation, and the next wave of rules.
If your leadership team needs practical upskilling on AI risk, governance, and adoption, explore role-based programs here: Complete AI Training: Courses by Job.