White House proposes national AI framework that would override state laws and limit company liability

The White House released a four-page AI policy framework that would override state AI laws and shield companies from liability for product harms. House Republican leaders quickly backed it, calling on Congress to act.

Published on: Mar 24, 2026

The White House on Friday released a framework for national artificial intelligence policy that would preempt state AI regulations and shield AI companies from broad liability for harms caused by their products.

The four-page proposal fulfills a December executive order from President Donald Trump directing his science and technology advisers to develop a national policy that overrides state laws. It urges Congress to protect children online, manage energy costs from data centers, and address copyright concerns while creating exemptions from federal rules for AI developers.

House Republican leaders immediately backed the framework. Speaker Mike Johnson and Majority Leader Steve Scalise, along with committee chairs overseeing energy, judiciary, and science matters, called for Congress to "take action" to "beat China in the global AI race."

What the Framework Covers

The proposal addresses seven categories: child safety, community effects, copyright, government censorship, federal regulation, jobs, and state preemption.

Child safety: The framework calls for parental controls over privacy, content, and screen time rather than age verification systems. It asks Congress to require AI platforms to implement features reducing risks of sexual exploitation and self-harm to minors, but offers no specifics on what those features should look like. It does not expand data privacy protections for children beyond existing rules under the Children's Online Privacy Protection Act.

Energy and data centers: Congress should ensure residential ratepayers don't face increased energy costs from data center expansion while streamlining permitting for those facilities. The framework also calls for stronger law enforcement efforts against AI scams.

Copyright: The framework largely defers to ongoing court cases over whether AI training constitutes fair use. It suggests Congress consider creating a system for collective licensing between rights holders and AI providers but stops short of requiring it.

Deepfakes: Congress should protect people from unauthorized deepfakes in commercial settings while carving out exceptions for satire, news reporting, and other First Amendment-protected speech.

Government pressure on content: The framework repeats Republican proposals to prevent what they call "indirect government censorship," meaning pressure from officials on tech companies to remove or change content, and would allow individuals to sue over such pressure.

Regulatory flexibility: AI companies could apply for exemptions from federal regulations for up to 10 years through regulatory sandboxes. Congress should make federal datasets more accessible for AI training and should not create a new AI oversight agency.

Jobs: Congress should use non-regulatory methods to integrate AI training into education and workforce programs. A February poll found 63 percent of Americans believe AI will reduce jobs.

Industry Support, Advocacy Pushback

Tech industry groups praised the framework for its light-touch approach. Patrick Hedger, director of policy for NetChoice, said the proposal shows the White House understands what is needed to advance AI innovation.

Daniel Castro of the Center for Data Innovation said the framework avoids "alarmism" about job losses and copyright concerns.

Advocacy groups warning of AI risks objected. Brad Carson, president of Americans for Responsible Innovation, said the framework would give "tech companies another chance to launch harmful products with no accountability."

The proposal contradicts elements of legislation from Sen. Marsha Blackburn, a Tennessee Republican who released her own AI bill this week. Blackburn's measure would impose a "duty of care" on AI developers to prevent harms to users, a requirement the White House framework explicitly warns against on the grounds that such standards could trigger "excessive litigation."

State Preemption as Central Goal

Preempting state AI laws has been a long-standing priority for the AI industry and the Trump administration. Congress left it out of the GOP budget reconciliation bill last summer and declined to add it to the annual defense policy bill.

The framework proposes broad federal preemption while allowing states to maintain generally applicable laws, zoning rules for data centers, and state procurement authority. States would be prohibited from regulating AI development or penalizing AI developers for how third parties use their products.

For government employees working on AI policy and regulation, the framework signals the administration's priorities for how federal rules will govern AI development and where state authority will be limited.

