Concerns Raised Over AI Regulation Clause in House Budget Bill
Over 100 organizations have expressed serious concerns about a provision embedded in the House's expansive tax and spending cuts package that could significantly limit the regulation of artificial intelligence (AI) systems. The provision would block states from enforcing any laws or regulations related to AI models, systems, or automated decision systems for a decade.
As AI technology increasingly influences areas like personal communication, healthcare, hiring, and law enforcement, preventing states from applying their own regulatory measures could pose risks to users and society at large. The concerned groups outlined these risks in a letter addressed to key Congressional leaders, including House Speaker Mike Johnson and House Democratic Leader Hakeem Jeffries.
Key Risks Highlighted by Organizations
The letter warns that this moratorium on state-level AI regulation could allow companies to deploy harmful algorithms without accountability, regardless of intent or the severity of consequences. This lack of oversight could leave the public and lawmakers powerless to address misconduct related to AI technologies.
The bill cleared the House Budget Committee but still requires approval from the full House before moving to the Senate. Signatories of the letter include academic institutions such as the University of Essex and Georgetown Law’s Center on Privacy and Technology, alongside advocacy groups like the Southern Poverty Law Center and the Economic Policy Institute.
Labor groups representing tech workers, including Amazon Employees for Climate Justice and the Alphabet Workers Union, also signed the letter, reflecting widespread concern within the tech community regarding the future of AI regulation.
Industry and Political Context
Emily Peterson-Cassin, director of corporate power at Demand Progress, which drafted the letter, criticized the provision as a “dangerous giveaway to Big Tech CEOs,” arguing that it would prematurely embed unregulated AI into everyday life. She urged Congressional leaders to prioritize public interest over corporate donations.
This push to preempt state regulation comes amid President Donald Trump’s rollback of previous federal AI safeguards. Earlier this year, Trump revoked a Biden-era executive order that established some AI protections and moved to lift export restrictions on critical US AI chips. Maintaining US leadership in AI against international competitors like China remains a top priority for the administration.
Vice President JD Vance has voiced concerns about excessive regulation stifling AI innovation, emphasizing the need to avoid killing a transformative industry as it grows.
States Taking Independent Action
In the absence of comprehensive federal AI laws, several states have enacted their own rules targeting high-risk AI applications. Colorado’s law, passed last year, requires companies to guard against algorithmic discrimination and mandates transparency when users interact with AI systems.
New Jersey recently criminalized the distribution of misleading AI-generated deepfake content, and Ohio is considering legislation to mandate watermarks on AI-generated media and prevent identity fraud via deepfakes. Several other states have passed laws regulating AI-generated deepfakes in elections.
Bipartisan Agreement on Specific AI Issues
Regulating certain AI applications has seen bipartisan support. For example, the Take It Down Act, set to be signed into law, will criminalize sharing non-consensual AI-generated explicit images. This law passed both chambers of Congress with wide support.
Interestingly, the House budget bill’s preemption provision contradicts calls from some tech leaders for clear AI regulations. OpenAI CEO Sam Altman testified before a Senate subcommittee in 2023, emphasizing that government regulatory intervention is vital to mitigate the risks posed by advanced AI models.
Altman has advocated for a risk-based regulatory framework that offers legal clarity to companies, helping them comply with consistent rules rather than a patchwork of state laws. He stressed the need for clear “rules of the road” to ensure responsible AI deployment while enabling innovation.
Conclusion
The debate over AI regulation is intensifying as federal lawmakers consider provisions that could restrict states’ abilities to enforce AI laws for years to come. Legal professionals and policymakers will need to weigh the balance between fostering AI innovation and protecting the public from potential harms.
For those in the legal field aiming to stay informed about AI technologies and their regulatory landscape, exploring specialized AI courses can provide valuable insight. More information is available through resources like Complete AI Training.