Executive Overreach or Innovation Savior? Trump's December Surprise Takes Aim at State AI Laws
President Trump signed an Executive Order (EO) titled "Ensuring a National Policy Framework for Artificial Intelligence," aimed at sidelining state AI laws in favor of a "minimally burdensome" federal approach. His message was blunt: "There must be only One Rulebook," citing the risk of 50 states "involved in RULES and the APPROVAL PROCESS."
The order is aggressive. It mobilizes the Department of Justice to attack state AI regulations, pressures states with federal funding, and leans on the FTC and FCC to preempt state standards. It also sets up a federal legislative push to formally preempt state law.
What's actually in the EO
Section 3 - AI Litigation Task Force. The Attorney General must create a task force whose "sole responsibility" is to challenge state AI laws that don't align with the EO's policy. Translation: DOJ becomes an active litigant against state AI policymaking.
Section 4 - Federal identification of "onerous" state laws. Commerce must publish, within 90 days, an evaluation of state AI laws that allegedly force "alterations to truthful outputs," compel unconstitutional disclosures, or otherwise conflict with the EO's policy. The list will spotlight targets, but these state laws remain enforceable unless and until a court says otherwise.
Section 5 - Broadband funding as leverage. The Commerce Secretary must condition remaining BEAD program funds on states agreeing not to enforce "onerous" AI laws while receiving funding. Agencies are also told to consider similar conditions for discretionary grants. This is the Administration's big stick, despite a near-unanimous Senate rejection (99-1) of a similar approach earlier this year.
Sections 6-7 - Agency preemption efforts. The FCC must consider federal AI reporting and disclosure standards that preempt conflicting state rules. The FTC must issue a policy statement on when state rules requiring "alterations to truthful outputs" are preempted by the FTC Act's deception authority. Both agencies' preemption authority over state consumer protection law is limited, which sets up a fight.
Section 8 - Legislative recommendation. The White House AI leads must propose federal legislation preempting state AI laws, with carve-outs for areas like child safety, AI compute infrastructure (apart from general permitting), and state procurement and use of AI. This concedes what matters most: Congress, not an EO, is needed to truly preempt.
Timeline the EO sets in motion
- DOJ AI Litigation Task Force established → 30 days
- Commerce evaluation of state laws → 90 days
- BEAD funding Policy Notice → 90 days
- FTC policy statement on "truthful outputs" → 90 days
- FCC preemption proceeding → 90 days after Commerce evaluation
Why now: the "Stargate" context
The Administration has tied its AI agenda to "Project Stargate" - a proposed $500 billion infrastructure partnership with OpenAI, SoftBank, and Oracle. The aim: deploy capital fast, build data centers, and train models at scale. In that frame, state safety audits and bias assessments look like friction.
There's skepticism. Elon Musk questioned whether the money exists, saying "SoftBank has well under $10B secured." Whether the capital materializes or not, this EO functions as a regulatory battering ram to clear state-level hurdles.
Why the EO is likely headed for a courtroom buzzsaw
1) Congress already balked
A federal moratorium on state AI laws passed the House but collapsed in the Senate under the Byrd Rule and bipartisan resistance. Sen. Josh Hawley called that approach "constitutional kryptonite." Colorado Rep. Brianna Titone put it plainly: "Congress enacts laws ... an executive order is not law." If Congress wouldn't pass it, convincing courts to bless a unilateral EO is an uphill climb.
2) The Tenth Amendment and anti-commandeering
Consumer protection, employment practices, and insurance are classic state police powers. The EO tells DOJ and the FTC to go after those laws anyway. Without clear congressional authorization, that's shaky. In Gonzalez v. Oregon, the Supreme Court rejected a DOJ attempt to override state policy absent explicit statutory authority. There is no federal "AI Act" here delegating a preemptive power over state AI statutes.
3) The "truthful outputs" theory is strained
The EO nudges the FTC to say states can't require "alterations" to "truthful" AI outputs. That assumes a premise courts may not accept: that outputs are "truthful" in a way tied to FTC deception jurisprudence. AI outputs are probabilistic and context-dependent. Expanding "deception" to block state bias rules looks like doctrinal contortion.
4) Spending Clause limits
Conditioning funds to coerce policy change faces two guardrails. Under South Dakota v. Dole, conditions must relate to the program's purpose. Under NFIB v. Sebelius, threatening the loss of large, independent funding streams can be unconstitutionally coercive. Using BEAD dollars to force states to suspend AI bias or safety laws may be mismatched to the program and coercive in effect.
Critics also argue the EO rests on an overly broad view of the Commerce Clause that courts have not endorsed and that draws bipartisan skepticism. Expect immediate litigation from states and affected stakeholders.
What this means for legal teams
Do not stand down on state compliance. California's TFAIA and chatbot law, Colorado's AI anti-discrimination rules, and Utah's AI disclosure provisions remain enforceable until a court says otherwise. Several new statutes, including Texas's, are scheduled to take effect on January 1, 2026.
- Anticipate a wave of lawsuits: states suing to enjoin the EO; DOJ challenging state laws; private parties seeking clarity.
- Expect patchwork plus uncertainty: federal pronouncements won't preempt by magic. Courts decide.
- Budget for dual compliance: federal policy statements won't shield you from state AGs or private actions.
Action checklist
- Inventory exposure: map products, models, and deployments to specific state obligations (California, Colorado, Utah, soon Texas).
- Preserve optionality: design controls that can satisfy multiple regimes (disclosure, testing, bias impact assessment) without committing to a single interpretation.
- Update risk memos: add EO-driven litigation risk, grant-condition risk, and potential injunction scenarios.
- Strengthen documentation: keep model cards, evaluation reports, and audit trails current; they are your first line of defense with state AGs.
- Monitor agency moves: track the FTC "truthful outputs" statement and any FCC disclosure docket; comment where your interests are at stake.
- Prepare the board: brief on the likelihood of conflicting directives and the plan to prioritize binding court orders over policy statements.
The bottom line
The Administration is signaling that state rules are in the way of an "AI-first" national strategy. States will argue they are doing what they've always done: act as laboratories of democracy. Courts will be asked to decide where the line is.
Until a judge draws that line, the state patchwork stands - and the EO adds another layer of conflict. Proceed as if your state obligations remain fully enforceable. The cost of guessing wrong could be measured in eight figures.
"It is one of the happy incidents of the federal system that a single courageous state may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country." - Justice Louis D. Brandeis