Trump's AI Order Targets State Rules. What It Means For Government Work
President Donald Trump signed a new executive order on AI that puts state laws in the crosshairs. The White House says the goal is a "minimally burdensome" national policy that protects kids, prevents censorship, and respects copyrights, all without tripping up innovation or investment.
The order arrives after a year in which all 50 states introduced AI bills and 38 of them adopted or enacted roughly 100 measures. The administration argues that this growing patchwork makes compliance messy and slows U.S. momentum in the race against China.
What the order does
- Directs federal agencies to favor a single national approach over conflicting state requirements.
- Creates an AI Litigation Task Force to challenge state laws that conflict with the administration's policy.
- Calls for a list of "onerous" state AI provisions and threatens to withhold federal broadband funding from states that keep them.
- Builds on prior actions to support AI developers, fast-track data center construction, and promote U.S. AI exports.
Why this matters if you work in government
- Preemption risk: State AI rules, including procurement clauses, disclosure mandates, and model testing requirements, could face federal challenges.
- Funding exposure: States that keep aggressive AI guardrails may see threats to federal broadband dollars tied to programs like NTIA's Broadband Equity, Access, and Deployment (BEAD) Program.
- Compliance complexity: Agencies may need to follow current state law while preparing for possible federal preemption or litigation.
- Procurement and contracts: Vendor obligations (risk disclosures, model audits, data use, safety testing) may need updates if preemption moves forward.
- Public trust: Residents will ask what protections remain for deepfakes, AI advice to minors, and use of public data if state rules are weakened.
Pushback from child-safety advocates
Common Sense Media CEO James Steyer called the move "an outrageous betrayal of the states" that have stepped in while Congress has stalled. He argued families "need every cop on the beat," not fewer state protections.
The group previously fought a provision in Trump's "One Big Beautiful Bill" that would have paused state AI enforcement for a decade; that language was removed, and the group now opposes the executive order. It has also raised alarms about AI chatbots engaging with struggling teens, citing cases shared in a Senate hearing in which parents linked chatbot use to severe mental health spirals.
What experts are saying
Daniel Schiff of Purdue University said the order reads more as a push for innovation than a careful balance of risk and benefit. He sees value in state experimentation and doubts a single federal bill can cover the full scope of AI. His bottom line: the U.S. needs better coordination between federal and state levels, not a blanket squeeze on state action.
Anton Dahbura of Johns Hopkins echoed concerns about federal inaction: without a serious shift in how Washington approaches AI's effects, good and bad, state rules are carrying the weight.
Legal outlook and practical effects
The new AI Litigation Task Force could chill new state bills even if federal suits fall short in court. State officials may avoid fights, and the funding threat could nudge legislatures to pull back on AI oversight.
Expect arguments over constitutional grounds for preemption, definitions of "onerous," and whether the order overreaches by tying broadband dollars to AI policy compliance.
Immediate actions for agencies and state leaders
- Inventory current and pending AI-related statutes, executive directives, and procurement clauses; flag anything at risk of preemption (a minimal record sketch follows this list).
- Coordinate early with your Attorney General, CIO, and legislative counsel on litigation exposure and fallback options.
- Review broadband grant dependencies and timelines; prepare contingency plans if funding becomes leverage.
- Update vendor requirements to preserve core protections (safety testing, incident reporting, data provenance) that can survive a preemption challenge.
- Strengthen youth-safety protocols around chatbots and mental health referrals; document safeguards and escalation paths.
- Maintain public transparency: publish model uses, data sources, and known limitations; open a channel for resident feedback and harm reports.
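For teams formalizing that inventory, here is a minimal sketch of what each record might capture, assuming a simple in-house Python tool. The field names, risk tiers, and helper function are illustrative assumptions, not a standard or a mandated schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class PreemptionRisk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class PolicyItem:
    """One AI-related statute, executive directive, or procurement clause."""
    title: str
    kind: str                      # "statute" | "executive_directive" | "procurement_clause"
    status: str                    # "enacted" | "pending"
    requirements: list[str] = field(default_factory=list)  # e.g. disclosure, model audits
    preemption_risk: PreemptionRisk = PreemptionRisk.LOW
    notes: str = ""

def flag_for_review(items: list[PolicyItem]) -> list[PolicyItem]:
    """Return items worth early legal review: anything pending or medium/high risk."""
    return [
        i for i in items
        if i.status == "pending" or i.preemption_risk is not PreemptionRisk.LOW
    ]

if __name__ == "__main__":
    inventory = [
        PolicyItem(
            title="Vendor AI safety-testing clause",
            kind="procurement_clause",
            status="enacted",
            requirements=["safety testing", "incident reporting"],
            preemption_risk=PreemptionRisk.MEDIUM,
        ),
        PolicyItem(
            title="Chatbot disclosure bill",
            kind="statute",
            status="pending",
            requirements=["disclosure"],
            preemption_risk=PreemptionRisk.HIGH,
        ),
    ]
    for item in flag_for_review(inventory):
        print(f"{item.title}: {item.preemption_risk.value} risk, {item.status}")
```

Even a spreadsheet with these columns works; the point is to capture kind, status, and a preemption-risk flag per item so legal review can be prioritized before litigation arrives.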
Context you should know
Congress did pass the "Take It Down Act," championed by First Lady Melania Trump, which criminalizes the nonconsensual publication of intimate imagery, including AI-generated deepfakes. Beyond that, experts see little sign that broad AI guardrails will clear the current political gridlock.
States, meanwhile, continue to test approaches across disclosure, bias audits, safety testing for high-risk systems, and election integrity. For a running view, see the National Conference of State Legislatures' AI legislation tracker.
Open questions for 2025
- Which specific state provisions will land on the "onerous" list?
- How aggressively will the AI Litigation Task Force pursue preemption cases, and where?
- How will agencies define "protecting children," "preventing censorship," and "respecting copyrights" in enforcement practice?
- Will federal broadband funding actually be pulled, or will the threat serve mainly as pressure?
- Can states and the White House align on shared standards for high-risk systems without freezing innovation?
Upskilling your team
If you're developing AI policy, running procurements, or overseeing data programs, a baseline of AI literacy across staff helps. For role-specific learning paths, see curated course lists by job at Complete AI Training.