White House AI Regulation Plan Faces Legal Hurdles as States Push Back on Federal Preemption and Funding Threats

The White House’s AI Action Plan seeks to limit state AI regulations, raising legal risks over funding and FCC preemption. States push back, citing varied safety and rights concerns.

Categorized in: AI News, Legal
Published on: Jul 29, 2025

The White House’s Latest Moves on AI Regulation

The White House has introduced a new AI Action Plan aimed at limiting state-level AI regulations, raising the potential for legal battles. At the core are questions about how federal agencies claim preemption over state laws and which federal funds the administration may withhold from states. The plan directs the Federal Communications Commission (FCC) to assess whether state AI rules conflict with its responsibilities under the Communications Act of 1934.

President Donald Trump promoted the plan as a way to boost innovation and maintain U.S. leadership in artificial intelligence, especially against China. However, the plan's threats to withhold funding, along with possible FCC preemption, resemble the moratorium on state AI laws that Congress recently rejected after opposition from bipartisan governors, legislators, and attorneys general.

States want to regulate AI for varied reasons. Ellen P. Goodman, a law professor at Rutgers, points out that even states supportive of AI development have reasons to protect interests ranging from children’s safety to artists’ rights. For example, Tennessee’s ELVIS Act protects musicians from AI-generated voice deepfakes. Over the past three years, states have enacted numerous AI-specific laws addressing election misinformation, deepfakes, and algorithmic bias in hiring and lending. The tech industry has actively sought to block or narrow these regulations.

Trump’s plan states it will not interfere with states’ rights to regulate AI unless such regulations are “unduly restrictive to innovation,” but it provides little clarity on what qualifies as restrictive. According to Eric Null from the Center for Democracy and Technology, laws like Colorado’s sweeping algorithmic bias legislation likely would not pass a federal review. That law covers AI decision-making across employment, lending, education, health care, housing, and insurance, and requires audits and consumer notices. Colorado’s governor has urged lawmakers to revise or delay the law before it takes effect in 2026. It remains uncertain whether narrower laws like the ELVIS Act could face federal scrutiny.

Unclear Funding at Risk

Legal challenges against the White House’s plan will depend heavily on which “AI-related discretionary funding programs” the administration decides to withhold, following an evaluation led by the Office of Management and Budget (OMB). Mackenzie Arnold from the Institute for Law & AI explains that if the funding restrictions are limited to genuinely AI-specific programs, the impact will be narrow. However, if the definition expands to include broader funding streams such as broadband, the administration’s leverage over states significantly increases.

Cody Venzke of the American Civil Liberties Union notes that an overly broad interpretation of funding restrictions could strengthen states’ legal arguments. The federal government can condition spending but must clearly define those conditions. Unannounced or vague terms could raise constitutional issues. Venzke also expressed hope that the OMB will apply the safeguards it recommended in a recent memo on federal agency use of AI tools.

Goodman highlights that the White House’s plan also proposes funding opportunities for infrastructure critical to AI, such as data centers and power sources. Conditions attached to such funding would likely withstand legal challenges.

FCC Preemption Challenges

Preempting state AI regulations under the Communications Act presents a more difficult challenge. There is no clear precedent granting the FCC direct authority over AI. Scott Kohler from the Carnegie Endowment for International Peace explains that courts require explicit federal intent and a genuine conflict between federal and state policies to support preemption.

The FCC's regulatory authority traditionally covers areas like landline telephones and broadcast media, not social media or AI applications. Venzke notes that the FCC's jurisdiction does not extend to many AI-related areas. However, the FCC might argue for preemption of state laws regulating AI-generated election ads on broadcast TV, or of state restrictions on AI-related telecom infrastructure.

Goodman anticipates the FCC will at least initiate inquiries into possible conflicts, even if strong legal grounds are lacking. Kohler adds that defining which policy areas are sufficiently AI-related to justify preemption or funding restrictions will complicate enforcement. Many state efforts involve longstanding issues like elections, privacy, and consumer protection that intersect with AI but are not exclusive to it.

Implications for Legal Professionals

The evolving federal approach to AI regulation signals a complex legal environment for states and stakeholders. Legal professionals should monitor how federal agencies define “AI-related” funding and the scope of preemption claims. The balance between federal oversight and state regulatory autonomy remains unsettled, with significant constitutional questions on the horizon.

Staying informed about these developments will be crucial for advising clients in government, technology, and related sectors. For legal experts seeking to deepen their understanding of AI’s regulatory landscape, exploring specialized courses can provide valuable insights and practical skills.

Explore AI-focused legal training options at Complete AI Training.
