Let States Lead on AI Regulation, Not Congress Alone
A provision in the House budget bill would block state AI regulations for a decade, bypassing democratic debate. States must retain the ability to govern AI alongside federal oversight.

Don’t Silence the States on AI
Hidden within the recent House budget bill is a provision that would block any new state or local laws regulating artificial intelligence for the next decade. The result would be a moratorium on public oversight imposed without hearings, debate, or a clear federal alternative, an approach that bypasses democratic norms and disrupts the balance of federalism.
There is wisdom in restraint when regulating emerging technology: regulation can stifle innovation and push breakthroughs offshore. But AI is different. It moves fast, touches nearly every part of the economy, and demands careful governance. A blanket ten-year ban on state regulation is not the answer. Rather than merely narrowing this moratorium, the Senate should reject any effort to bar states from acting and commit instead to a proper legislative process, with hearings and input from all stakeholders.
Regulation must come through democratic processes, not hidden in budget riders. Our federal system has long managed the tension between state and federal authority, especially regarding interstate commerce. Federal preemption is appropriate in limited, specific cases—not as a broad stroke that wipes out every state law connected to AI, from civil rights to consumer protections.
Justice Louis Brandeis once described states as “laboratories of democracy,” places where policies can be tested and improved before going national. This principle is especially relevant now. AI is still unfolding in how it affects work, markets, and daily life. We need state-level experimentation, not a federal freeze.
What Congress Should Do Instead
Rather than silence states, lawmakers should focus on three key steps:
- Update Sector-Specific Laws: Committees should revise laws in banking, healthcare, agriculture, education, and other sectors to account for AI’s impact. Each area has unique risks and opportunities. The committees closest to these markets know the issues and are best placed to act precisely and swiftly.
- Mandate Transparency and Disclosure: Congress should require clear and enforceable transparency standards for AI systems that pose significant risk. This isn’t about pre-approval or gatekeeping, but about codifying obligations for disclosure, recordkeeping, and reporting on model behavior. Transparency must be enforceable and independently auditable.
- Prevent Catastrophic Risks: Lawmakers must ensure AI is not used to create major threats such as autonomous weapons, engineered biological hazards, or tools for large-scale financial manipulation. While some existing laws may apply, a thorough review is necessary to close dangerous legal gray zones.
Additionally, Congress should respect general-purpose state laws—those covering fraud, discrimination, negligence, product liability, and unfair business practices. These laws have long been the frontline for consumer protection. AI does not operate in a legal vacuum. If AI is used to commit fraud or manipulate markets, victims must have access to justice in state courts.
A broad federal preemption risks shifting ordinary tort claims into federal courts, which are far less equipped to handle the volume. State courts manage around 100 million cases annually with roughly 30,000 judges. Federal courts handle under half a million cases with about 1,700 judges. Overloading federal dockets with AI-related disputes would paralyze the system.
Past regulatory efforts in securities fraud and digital advertising drew careful jurisdictional lines between federal and state oversight. Congress can do the same for AI, maintaining consistency without stripping states of their authority.
AI is a national issue, but the solution is not exclusively federal. The right path involves clear federal guardrails in high-risk areas, updates to sector-specific laws, and state flexibility to enforce basic rights and responsibilities. Let Congress and the states each do their part.
Ultimately, AI is as much a test of governance as it is a technological challenge. Legislating wisely requires time and process, not shortcuts. A decade-long gag order on states would be a failure of responsibility and imagination.