Utah Gov. Cox urges states to set AI rules before Congress preempts them

Don't wait on Washington: set state AI rules now before preemption narrows your options. Use risk tiers, election safeguards, and agency policies that plug into future standards.

Published on: Dec 03, 2025

"States must act": Why state AI rules can't wait for Washington

Utah Gov. Spencer Cox is pushing a simple message: don't wait for Congress to tell you what you can't do. If federal preemption comes, it could wipe out state AI laws or freeze your options. That makes now the moment for state leaders to set clear, workable guardrails.

For people in government roles, the path is practical: protect residents, provide clarity to agencies and vendors, and keep room for innovation. Do it in a way that can plug into future federal standards without starting over.

What this push is about

  • Guardrails before preemption: Build baseline protections now so your state isn't locked out later.
  • Election integrity: Address AI-generated deepfakes, deceptive content, and rapid response coordination before the next cycle.
  • Public-sector use: Set rules for how agencies adopt AI, covering privacy, procurement, and accountability.
  • Pro-innovation clarity: Create predictable expectations for startups and vendors so they can build with confidence.

Immediate actions for governors, legislators, CIOs, and AGs

  • Stand up a cross-agency AI task force: Include CIO, CISO, AG's office, elections, education, health, labor, and procurement. Time-box the first recommendations to 60 days.
  • Adopt a risk framework: Use tiered risk levels (low, moderate, high) with increasing requirements for testing, oversight, and disclosure. Anchor to the NIST AI Risk Management Framework for interoperability with federal efforts (see the sketch after this list).
  • Issue a statewide AI use policy for agencies: Define approved vs. prohibited use cases, require human review for high-impact decisions, ban uploading PII or confidential data to public models, and set recordkeeping standards.
  • Update procurement templates: Add data handling, security, model change logs, incident reporting, performance metrics, bias testing, and human-in-the-loop requirements for high-risk systems. Require vendors to disclose model lineage and known limits.
  • Transparency and labeling: Require clear labeling when the public interacts with AI and disclosures on AI-generated content used by agencies. Encourage content provenance and watermarking where feasible.
  • Election safeguards: Define deceptive deepfake practices, require disclaimers on synthetic political content, and set rapid response protocols with platforms. Train county clerks and communications teams.
  • Privacy and data minimization: Ensure AI use aligns with existing state privacy laws. Limit data retention and set clear deletion timelines.
  • Education and workforce: Provide role-based training for public servants across policy, legal, IT, procurement, and communications. Consider curated learning paths such as AI courses by job to upskill teams fast.
  • Sandbox and pilots: Allow limited pilots under oversight with risk controls, audit plans, and community feedback before statewide rollout.
  • Incident reporting and audits: Require agencies and vendors to report material failures, bias findings, or security incidents within set timeframes. Schedule independent audits for high-risk systems.
  • Small business considerations: Offer simplified compliance paths and technical assistance so local innovators aren't crowded out.
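
To make the risk-framework item above concrete, here is a minimal sketch of how a state could encode cumulative tier requirements. The tier names, requirement labels, and the classify_system rule are illustrative assumptions, not NIST AI RMF definitions or any state's actual policy.

```python
# Illustrative tiered AI risk framework. Tier names, requirement labels,
# and the classification rule are assumptions for this sketch only.

TIER_ORDER = ["low", "moderate", "high"]

# Requirements escalate with risk; higher tiers inherit everything below.
TIER_REQUIREMENTS = {
    "low": ["inventory_entry", "acceptable_use_acknowledgment"],
    "moderate": ["pre_deployment_testing", "public_disclosure"],
    "high": ["impact_assessment", "human_review",
             "independent_audit", "incident_reporting"],
}

def requirements_for(tier: str) -> list[str]:
    """Cumulative requirements: a high-risk system carries the whole stack."""
    reqs: list[str] = []
    for t in TIER_ORDER:
        reqs.extend(TIER_REQUIREMENTS[t])
        if t == tier:
            return reqs
    raise ValueError(f"unknown tier: {tier}")

def classify_system(automated_decision: bool, affects_rights: bool) -> str:
    """Toy rule: automated decisions that affect rights or benefits are
    high risk; automated-but-advisory tools are moderate; the rest low."""
    if automated_decision and affects_rights:
        return "high"
    if automated_decision:
        return "moderate"
    return "low"

# A chatbot drafting emails lands in the low tier; an unemployment
# benefits adjudication engine is high risk and carries every requirement.
tier = classify_system(automated_decision=True, affects_rights=True)
print(tier, requirements_for(tier))
```

The cumulative structure is the point: a chatbot and an adjudication engine land in different tiers and face proportionate obligations, which is exactly the distinction the "What to avoid" section warns against collapsing.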

Policy elements that work across administrations

  • Clear definitions: Avoid vague language. Define "high-risk," "automated decision system," "AI-generated content," and "material impact."
  • Rights and recourse: Provide notice, explanation, and an appeal route for people affected by automated decisions.
  • Documentation: Require impact assessments and testing plans for high-risk systems before deployment and at major model updates (a minimal record schema follows this list).
  • Records and retention: Keep prompts, outputs, and decision logs for auditability while respecting privacy limits.
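
One way to keep that documentation consistent across agencies is a standard filing format. The sketch below assumes hypothetical field names, an example agency, and example values; it is not a mandated schema.

```python
# Illustrative impact-assessment filing for a high-risk system.
# Field names and the example values are assumptions, not a real schema.

from dataclasses import dataclass
from datetime import date

@dataclass
class ImpactAssessment:
    system_name: str
    agency: str
    risk_tier: str            # tier under the state's risk framework
    purpose: str              # the decision or task the system supports
    data_sources: list[str]   # inputs; PII handling set in procurement terms
    human_review: bool        # required for high-impact decisions
    bias_testing_plan: str    # how disparate impact will be measured
    model_version: str        # re-file at major model updates
    filed_on: date

filing = ImpactAssessment(
    system_name="benefits-triage",
    agency="Dept. of Workforce Services",   # hypothetical example agency
    risk_tier="high",
    purpose="Prioritize unemployment claims for caseworker review",
    data_sources=["claims_db", "wage_records"],
    human_review=True,
    bias_testing_plan="Quarterly disparate-impact analysis by region",
    model_version="v2.1",
    filed_on=date.today(),
)
print(filing.system_name, filing.risk_tier)
```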

What to avoid

  • Overbroad bans that block low-risk uses like drafting emails or summarizing public documents.
  • One-size-fits-all mandates that treat a chatbot the same as an unemployment benefits adjudication engine.
  • Vague compliance language that makes enforcement subjective and deters responsible vendors.
  • Hidden obligations that effectively require source code disclosure where that's unnecessary to assess risk.

30-60-90 day roadmap

  • Day 0-30: Form the task force, publish interim principles, pause high-risk deployments that lack oversight, and update procurement checklists.
  • Day 31-60: Release the statewide agency use policy, risk tiers, and a standard impact assessment template. Launch 2-3 supervised pilots.
  • Day 61-90: Introduce legislation covering disclosures, election integrity, rights of appeal, and audits. Set training requirements and publish an AI system registry for state use (a minimal entry format is sketched below).
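
The registry in the day 61-90 step can start as a published, machine-readable file. This sketch assumes a minimal entry format with hypothetical systems and a hypothetical vendor name.

```python
# Minimal public AI system registry export. Entry fields, systems, and
# the vendor name are assumptions for illustration.

import json

registry = [
    {
        "system": "benefits-triage",
        "agency": "Dept. of Workforce Services",
        "risk_tier": "high",
        "public_facing": False,
        "impact_assessment_filed": "2025-11-15",
        "vendor": "ExampleVendor Inc.",
    },
    {
        "system": "dmv-virtual-assistant",
        "agency": "Division of Motor Vehicles",
        "risk_tier": "low",
        "public_facing": True,              # must carry clear AI labeling
        "impact_assessment_filed": None,    # not required below the high tier
        "vendor": "ExampleVendor Inc.",
    },
]

# Publishing the file gives vendors, auditors, and the public one place
# to check what the state is running and at what risk tier.
print(json.dumps(registry, indent=2))
```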

How to measure progress

  • Number of agency pilots with completed impact assessments
  • Time to procure compliant AI solutions (baseline vs. after policy)
  • Incidents reported and resolved within policy timelines
  • Public-facing interactions with clear AI labeling
  • Election-related takedown response times

Bottom line

States have a window to set practical AI rules that protect people and give clear signals to the market. Wait too long and preemption could set the ceiling for you. Act now, anchor to recognized risk frameworks, and leave room to adjust as federal guidance evolves.

