States will keep pushing AI laws despite Trump's efforts to stop them
States aren't waiting around. Even with national voices pushing to slow or preempt local AI rules, lawmakers and agencies see immediate pressure from residents, workers, and vendors to set guardrails that fit local priorities.
For government professionals, the message is simple: assume more state-level AI action, expect legal friction, and build processes that work under uncertainty.
Why states aren't waiting
State leaders are responding to practical, near-term risks and complaints they hear every day. They also tend to move faster than federal processes and can test ideas before Congress acts.
- Residents want clarity on data use, consent, and AI disclosures.
- Agencies need rules for procurement, risk, and accountability.
- Employers and schools are deploying AI now; guardrails can't lag a year behind.
- Local politics reward visible action on privacy, bias, and fraud prevention.
What state AI bills are doing right now
States are taking varied, targeted approaches instead of one giant framework. Expect overlapping themes with different thresholds and definitions.
- Disclosure: Requirements to tell people when they're interacting with AI, especially in customer service or government services (Utah enacted disclosure rules in 2024).
- Hiring and credit tools: Testing, validation, or impact assessments for automated decision systems that affect jobs, housing, healthcare, or benefits (inspired by measures like New York City's Local Law 144 and Illinois' Artificial Intelligence Video Interview Act).
- Deepfakes and impersonation: Civil or criminal penalties for election-related deepfakes and unauthorized voice likeness uses (e.g., Tennessee's ELVIS Act).
- High-risk system duties: Risk management, documentation, and incident reporting for systems that materially affect rights or critical services (Colorado passed a broad law in 2024).
- Face recognition and surveillance: Limits, audits, or warrants for certain uses, with carve-outs for public safety or emergencies.
- Vendor accountability: Contract terms that require transparency, data protections, and cooperation with audits.
Federal pushback and preemption pressure
At the federal level, proposals and political pressure have aimed to curb a patchwork of state rules. President Donald Trump and his allies have signaled support for limiting state measures they view as burdensome or as speech-related overreach.
Preemption could come through new federal statutes, agency rules, or litigation arguing conflicts with federal law or the Constitution. Until that's settled, states are likely to keep moving, and courts will sort out where lines get drawn.
The patchwork problem (and how to operate in it)
Different states will use different definitions, risk tiers, and enforcement tools. Build for variance instead of betting on uniformity.
- Adopt a baseline that meets the strictest common requirements you face (disclosure, risk assessments, human review for high-stakes outcomes).
- Centralize AI system inventories, owners, and risk classifications across agencies (a minimal sketch of one inventory record follows this list).
- Use contracts to standardize vendor obligations: testing, logs, model change notices, and audit rights.
- Create a rapid update path so policy can track new statutes without a full rewrite.
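To make the inventory idea concrete, here is a minimal sketch of what one centralized inventory record could look like. The field names, risk tiers, and sample entry are illustrative assumptions, not terms drawn from any statute or framework; adapt them to your own risk policy.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers; actual thresholds come from your statutes and policy."""
    MINIMAL = "minimal"    # no effect on rights, benefits, or safety
    MODERATE = "moderate"  # informs, but does not decide, outcomes
    HIGH = "high"          # materially affects rights, benefits, or safety


@dataclass
class AISystemRecord:
    """One row in a centralized, cross-agency AI system inventory."""
    system_name: str
    owning_agency: str
    business_owner: str                         # an accountable person, not just a team
    vendor: str | None                          # None for systems built in-house
    purpose: str
    data_sources: list[str] = field(default_factory=list)
    risk_tier: RiskTier = RiskTier.MODERATE
    human_review_required: bool = True          # default to review until classified
    last_impact_assessment: str | None = None   # ISO date of most recent assessment


# Example entry for a hypothetical benefits-screening tool.
record = AISystemRecord(
    system_name="benefits-eligibility-screener",
    owning_agency="Dept. of Human Services",
    business_owner="Eligibility Program Director",
    vendor="ExampleVendor Inc.",
    purpose="Flag incomplete benefits applications for caseworker review",
    data_sources=["application forms", "income verification records"],
    risk_tier=RiskTier.HIGH,
    human_review_required=True,
    last_impact_assessment="2024-11-01",
)
```

Whatever the format, the useful part is that every system has a named owner, a risk tier, and a date for its last impact assessment, so gaps are visible at a glance.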
Legal fault lines to watch
- Federal preemption: If Congress sets national rules, some state provisions may be displaced.
- First Amendment: Deepfake labeling and content rules regulate speech, so they must be narrowly scoped to survive heightened constitutional scrutiny.
- Due process and fairness: Individuals affected by automated decisions need notice, explanations, and a path to human review.
- Dormant Commerce Clause: State rules that burden interstate vendors may draw challenges if they effectively regulate conduct beyond state borders.
- Procurement law: Overly narrow specs can limit competition; too loose, and you inherit risk you can't manage.
Practical playbook for public-sector leaders
- Inventory: List every AI-enabled system in use or in procurement. Note purpose, data sources, and decision impact.
- Risk tiers: Classify systems by impact on rights, access to services, and safety. Map controls to each tier.
- Disclosures: Provide clear, plain-language notices when residents interact with AI or when AI informs decisions.
- Human-in-the-loop: Require human review for high-stakes outcomes (benefits, eligibility, enforcement).
- Impact assessments: Document risks, testing results, bias checks, and mitigation steps before deployment.
- Vendor terms: Mandate evaluation datasets, performance metrics, explainability artifacts, security controls, and incident reporting.
- Records and logs: Keep decision logs and model version histories for audits and appeals (a sample log entry follows this list).
- Resident recourse: Offer simple appeal mechanisms with timelines and contact points.
- Training: Upskill staff on AI basics, risk, and procurement. Refresh annually as laws change.
- Coordination: Sync with the Attorney General, CIO, privacy officer, and civil rights units to avoid gaps.
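As a companion to the records-and-logs item above, here is a minimal sketch of one audit-ready decision-log entry. The function and field names are hypothetical, not taken from any vendor API or statute; the point is that each logged decision ties back to a model version and a human reviewer, which is what audits and appeals will ask for.

```python
import json
from datetime import datetime, timezone


def log_decision(system_name: str, model_version: str, case_id: str,
                 outcome: str, reviewed_by_human: bool, reviewer: str | None) -> str:
    """Build one audit-ready log entry for an AI-assisted decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "model_version": model_version,   # lets auditors tie a decision to a model
        "case_id": case_id,
        "outcome": outcome,
        "reviewed_by_human": reviewed_by_human,
        "reviewer": reviewer,             # None only for low-risk, fully automated steps
    }
    return json.dumps(entry)


# Example: a benefits decision that went through human review.
print(log_decision(
    system_name="benefits-eligibility-screener",
    model_version="2024.10.2",
    case_id="CASE-001234",
    outcome="referred for caseworker review",
    reviewed_by_human=True,
    reviewer="caseworker-417",
))
```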
What's next
Expect more state bills targeting deepfakes, critical infrastructure, and high-risk decision systems. Enforcement will shift from guidance to action as Attorneys General and city agencies issue fines and consent orders.
Vendors will respond with standardized disclosures, testing reports, and audit portals. Agencies that set clear expectations now will negotiate better contracts and reduce downstream friction.
Helpful resources
- NIST AI Risk Management Framework - a practical structure for risk tiers, controls, and documentation.
- FTC guidance on AI claims - useful for advertising, disclosures, and avoiding unfair or deceptive practices.
- Complete AI Training - courses by job to help public-sector teams level up on AI procurement, governance, and audits.
Bottom line: States will keep moving because the risks - and the public's expectations - are local and immediate. Build durable processes now so your team can comply, adapt, and keep services reliable as the rules evolve.