White House releases AI legislation framework with liability protections for developers
The Trump administration released a legislative framework Friday that would establish federal AI rules while limiting legal liability for AI developers and restricting states' ability to regulate the technology independently.
The framework prioritizes child protections, intellectual property rights, and workforce development across seven main policy areas. It calls on Congress to require AI platforms to verify user age, combat AI-enabled scams, and protect minors from sexual exploitation.
The administration argues that federal rules are necessary to prevent states from creating "a patchwork" of individual regulations that could slow development. The White House said in an announcement that Congress should establish "a consistent national policy that enables us to win the AI race."
Liability limits draw developer support, regulatory pushback
The framework proposes sharp restrictions on developer liability, particularly for harms caused by third parties using AI systems. It opposes what it calls "open-ended liability" that could trigger "excessive litigation" over child safety issues.
These provisions align with messaging from David Sacks, the venture capitalist serving as the White House's AI czar, and other Silicon Valley investors who argue that broad liability rules would discourage investment in American AI companies.
The proposal also seeks to limit states' ability to "penalize AI developers for a third party's unlawful conduct involving their models."
States and Republicans split on federal preemption
The framework's push to restrict state regulation has divided Republicans. More than 50 GOP lawmakers wrote to Trump in early March opposing "attempts to halt state AI legislation," saying the administration was trying to "prevent the passage of measures holding the tech industry accountable."
The letter responded to Trump administration pressure against a proposed Utah bill requiring AI companies to disclose how they protect children and limit catastrophic risks from their models, such as assisting in bioweapon creation or cyberattacks.
California's SB 53 and New York's RAISE Act currently set the standard for state AI legislation, requiring leading companies to establish whistleblower protections, report safety incidents, and disclose model testing practices.
Framework includes anti-censorship provisions
The proposal calls on Congress to prevent the federal government from coercing AI providers to "ban, compel, or alter content based on partisan or ideological agendas."
This language follows Trump's recent decision to cut off Anthropic, a leading AI company, from government contracts for being "woke." Anthropic is now suing the federal government, claiming the cancellation violated its First Amendment rights.
Data center electricity concerns addressed
The framework calls on Congress to ensure that residential electricity rates do not rise as a result of new data center construction. Data center expansion has become a bipartisan issue in state capitals as officials weigh economic benefits against infrastructure strain.