Federal push to curb state AI rules: what the Dec 11, 2025 EO does
On December 11, 2025, President Trump signed an Executive Order (EO) that aims to limit states' ability to regulate artificial intelligence. The stated goal: a uniform national policy that favors speed and innovation over state-by-state rules. Practically, it sets up a federal posture to contest state AI laws and ties certain funding to policy alignment with the EO.
Reduced regulation does not mean reduced risk. Legal exposure remains, and most of it flows through long-standing frameworks you already know.
What the EO claims and directs
The EO argues that a 50-state patchwork stifles innovation, that some anti-discrimination rules would embed ideological bias in models, and that certain state laws unlawfully regulate beyond their borders under the Commerce Clause. It then directs aggressive federal action to pressure or preempt state regimes.
- Department of Justice: Create an AI Litigation Task Force within 30 days to challenge conflicting state laws.
- Department of Commerce: Within 90 days, publish an assessment of conflicting state AI laws and issue a Policy Notice making states with "onerous AI laws" ineligible for remaining federal grants.
- Grant leverage: All agencies must review discretionary grants and assess whether to condition awards based on consistency with the EO.
- BEAD linkage: AI policy is tied to eligibility under the Broadband Equity, Access, and Deployment (BEAD) Program, administered by NTIA.
- FCC: Open a proceeding within 90 days on a federal reporting/disclosure standard for AI models that could preempt state rules.
- FTC: Issue, within 90 days, a policy statement addressing when state laws are preempted because they require changes to truthful AI outputs. Statutory background: FTC Act.
- Legislative push: Presidential advisors must prepare legislative recommendations for a uniform federal framework.
The EO singles out Colorado's upcoming "algorithmic discrimination" law (effective June 2026) as an example it views as problematic, arguing it could pressure models into producing "false results."
Carve-outs: The EO states it will not preempt certain areas, including child safety, AI data center infrastructure, state government use of AI, and other topics as later determined.
What it does not do
Executive Orders do not preempt state law on their own. State and local AI statutes and ordinances remain in force unless and until validly preempted by federal law or struck down in court.
Expect litigation. States will likely challenge agency actions on Tenth Amendment grounds, argue Spending Clause coercion if grants are conditioned too aggressively, and test the boundaries of FCC and FTC authority. Congress is divided on federal preemption: in December 2025, lawmakers removed an NDAA provision that would have blocked state AI enforcement.
Less regulation ≠ less exposure
AI-related disputes will not show up labeled as "AI claims." They will arrive through familiar legal channels. Plaintiffs and regulators already have tools to act.
- Product liability (state law): Perception or decision flaws in autonomous systems that cause injury.
- UDAP (state): Pricing or sales tools that target vulnerable consumers or use manipulative tactics.
- FTC Act, Section 5 (deception): Misstatements about data collection, training use, or sharing.
- Antitrust (exclusionary conduct): Ranking/recommendation algorithms that self-preference and suppress rivals.
- Antitrust (algorithmic discrimination): Systems that systematically disadvantage certain businesses or entrants.
- Privacy and data protection (state/federal): Processing personal data without proper notice, consent, or lawful purpose.
- Biometric privacy: Facial or voice data used without statutorily required consent.
- Data security and breach: Weak controls over training or inference data leading to leaks or harmful outputs.
- IP infringement: Training on protected works that yields substantially similar content.
- Trade secrets: Training on confidential data obtained from an employee or partner without authorization.
What counsel should do now
- Track agency clocks: 30- and 90-day milestones for DOJ, Commerce, FCC, and FTC. Prepare to comment on FCC/FTC proceedings.
- Inventory AI use: Map systems, models, data sources, and use cases across products, operations, HR, and marketing.
- Contract updates: Tighten data rights, model behavior warranties, audit rights, SLAs, and indemnities. Flow down vendor obligations.
- Testing and documentation: Pre-deployment testing, adverse impact analysis, validation plans, red-teaming scope. Keep records.
- Privacy controls: Notices, consent/opt-out where required, data minimization, retention limits, and DPIAs. Special handling for biometric and children's data.
- Product safety: Human oversight, fallback modes, monitoring for harmful failure modes, incident response, and recall/escalation criteria.
- Security: Access controls for training/inference data, secrets management, prompt/response logging, and leakage testing.
- IP risk: Training data provenance checks, content filters, and review for substantial similarity and style cloning issues.
- Governance: Risk register, model cards/fact sheets, change control, and board reporting. Assign clear ownership.
- State watchlist: Track Colorado and other state/city rules. Adjust compliance profiles by jurisdiction.
- Grants and funding: If you rely on federal programs (including BEAD), model scenarios where grant conditions shift. Build contingency plans.
- Litigation readiness: Preserve prompts, outputs, datasets, and decision logs. Set retention and legal hold protocols now.
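On the logging and litigation-readiness items above, one way to make preserved records defensible is to hash-chain them so later alteration is detectable. The sketch below is illustrative only; the `AuditLog` class, field names, and retention flags are assumptions for demonstration, not a prescribed standard or any specific vendor's API.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log of AI prompts/responses with hash chaining.

    Illustrative sketch: field names and retention semantics are
    assumptions, not a legal or regulatory requirement.
    """

    def __init__(self, retention_days=365, legal_hold=False):
        self.retention_days = retention_days
        self.legal_hold = legal_hold  # when True, records are never purged
        self.records = []
        self._prev_hash = "0" * 64  # genesis value for the hash chain

    def record(self, model_id, prompt, response, metadata=None):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_id": model_id,
            "prompt": prompt,
            "response": response,
            "metadata": metadata or {},
            "prev_hash": self._prev_hash,
        }
        # Hash the canonical JSON so any later edit breaks the chain.
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.records.append(entry)
        return digest

    def verify(self):
        """Recompute the chain; returns True if no record was altered."""
        prev = "0" * 64
        for e in self.records:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog(legal_hold=True)
log.record("pricing-model-v2", "quote for customer 123", "$41.00")
log.record("pricing-model-v2", "quote for customer 124", "$43.50")
print(log.verify())  # True while the chain is intact
```

The design choice worth noting: chaining each record to the previous one means a complete, verifiable export can be handed to opposing counsel or a regulator without arguing over whether individual entries were edited after the fact.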
Key dates at a glance
- 30 days: DOJ AI Litigation Task Force.
- 90 days: Commerce state-law evaluation and Policy Notice; FCC proceeding; FTC policy statement.
- June 2026: Colorado algorithmic discrimination law takes effect (cited by the EO).
Bottom line: even if federal policy squeezes state AI rules, enforcement risk remains high. Ground your program in existing law, document your choices, and be ready to move as agencies test the limits of preemption.