White House releases AI policy framework targeting state laws, child safety, and copyright
The White House released a seven-chapter legislative framework on March 22, 2026, calling on Congress to establish a single national standard for AI development and to preempt state-level AI regulations. The document sets specific policy objectives across child safety, intellectual property, free speech, workforce development, and federal authority over AI governance.
The framework carries no legal force. It is a set of recommendations from the executive branch to Congress, signaling how the Trump Administration intends to approach AI regulation. The breadth of the recommendations marks a shift away from the fragmented, agency-by-agency enforcement model used previously.
Federal preemption of state AI laws
The most consequential recommendation calls on Congress to override state AI regulations that impose "undue burdens" on innovation. The document states: "States should not be permitted to regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications."
The administration has signaled this position before. In early 2026, the White House directed the Commerce Department to identify "onerous" state AI laws within 90 days, a deadline that roughly coincided with this framework's release.
The document carves out explicit exceptions. States retain authority to enforce general-purpose consumer protection laws, child protection statutes, fraud prohibitions, and zoning rules for AI infrastructure. States also keep authority over their own internal use of AI in procurement and law enforcement. What they lose is the ability to impose development-side restrictions on AI model training or deployment beyond what federal law permits.
For companies operating across multiple states, the practical implication is significant. A patchwork of 50 state compliance regimes, each with its own disclosure requirements, liability standards, and definitions of AI-generated content, creates operational risk. The framework explicitly aims for "one standard, not fifty discordant ones."
Child safety and targeted advertising
The first chapter addresses child safety with detailed recommendations directly applicable to digital advertising practices. Congress should "affirm that existing child privacy protections apply to AI systems, including limits on data collection for model training and targeted advertising," the document says.
The framework calls for age-assurance requirements, described as "commercially reasonable" and "privacy protective," for AI platforms likely to be accessed by minors. Parental attestation is cited as an example mechanism. Platforms would also implement features to reduce risks of "sexual exploitation and self-harm to minors."
This builds on enforcement momentum from the FTC and state attorneys general. In August 2025, 44 state attorneys general issued a formal warning to major AI companies, including Meta, Google, and OpenAI, demanding child protections. The White House framework would anchor these concerns in federal statute rather than relying solely on FTC enforcement or state action.
The revised Children's Online Privacy Protection Act, which took effect June 23, 2025, already requires separate parental consent for third-party data sharing and expanded definitions of child-directed services. Civil penalties reach $43,792 per violation. The framework signals Congress may go further, specifically tying AI model training and targeted advertising to those existing protections.
Copyright: courts decide, Congress waits
The framework's position on copyright is notably restrained. According to the document: "Although the Administration believes that training of AI models on copyrighted material does not violate copyright laws, it acknowledges arguments to the contrary exist and therefore supports allowing the Courts to resolve this issue."
Congress is explicitly told not to take actions that would influence pending or future litigation over whether AI training on copyrighted content constitutes fair use. The directive stands despite the Copyright Office having published three major reports on AI and copyright, and despite Congress having received multiple legislative proposals, including the TRAIN Act.
The framework does propose one mechanism: enabling collective licensing frameworks that would allow rights holders to negotiate compensation from AI providers without triggering antitrust liability. However, any such legislation "should not address when or whether such licensing is required," meaning Congress would create the mechanism but not mandate its use.
The framework also calls for federal law protecting individuals against unauthorized commercial use of AI-generated replicas of their voice, likeness, or other identifiable attributes. Explicit carve-outs apply for parody, satire, news reporting, and First Amendment-protected expression. This directly affects advertising: voice and likeness replication in AI-generated ad creative has become a live issue as generative tools mature.
Infrastructure and small business support
The framework's section on "Safeguarding and Strengthening American Communities" addresses physical AI infrastructure: data centers, power grids, and permitting. Congress should ensure that "residential ratepayers do not experience increased electricity costs as a result of new AI data center construction and operation," the document says.
Simultaneously, Congress is asked to streamline federal permitting for AI infrastructure construction, allowing developers to generate power on-site or behind the meter to accelerate buildout and "enhance grid reliability."
For small businesses, the framework calls for grants, tax incentives, and technical assistance programs to support wider AI deployment. This mirrors growing concern about the gap between large technology companies with extensive internal AI resources and smaller advertisers relying on platform tools and third-party solutions.
The framework also asks Congress to augment law enforcement efforts to combat AI-enabled impersonation scams targeting vulnerable populations such as seniors. For marketers, this intersects with brand safety: AI-generated impersonation content erodes consumer trust in digital channels.
Free speech and platform content policy
The framework's chapter on censorship uses pointed language. Congress should "prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas," the document says.
It further calls for an effective means for Americans to seek redress from federal agencies for efforts to "censor expression on AI platforms or dictate the information provided by an AI platform."
This reflects administration concerns about government influence over platform moderation decisions. For the advertising ecosystem, these provisions could affect content policies governing what AI systems can do in ad creative generation-an active area of concern as programmatic advertising integrates generative AI across campaign creation and optimization workflows.
Workforce development and regulatory approach
The framework establishes a workforce development agenda. Congress should use "non-regulatory methods to ensure that existing education programs and workforce training and support programs, including apprenticeships, affirmatively incorporate AI training," the document says.
Land-grant institutions are specifically called out as vehicles for technical assistance, demonstration projects, and AI youth development programs. This creates potential pathways for regional capability-building outside major technology hubs.
On regulation, the framework explicitly urges Congress not to create "any new federal rulemaking body to regulate AI." Instead, AI governance should flow through existing sector-specific regulators: AI in healthcare continues under FDA oversight, AI in financial services under SEC or CFTC jurisdiction, and AI in advertising under FTC authority.
Congress is also asked to make federal datasets available "in AI-ready formats" for use in training AI models, giving industry and academia access to government data holdings currently difficult to use for machine learning due to format and access constraints.
What legal professionals need to know
For in-house counsel and legal teams, the framework creates several immediate considerations. The preemption provisions would, if enacted, consolidate compliance requirements for AI-powered tools into a single federal standard. Companies currently facing potentially divergent obligations under state privacy and AI laws would operate under a federal floor with targeted exceptions for consumer protection, fraud, and child safety.
The child privacy provisions extend existing COPPA frameworks into AI systems, with explicit mention of "targeted advertising" as a data use subject to limits when minors are involved. This builds on FTC enforcement momentum already reshaping how advertising platforms handle data from users under 18.
The copyright provisions affect publishers and content creators whose material may be used in AI training datasets. The administration's position, that training on copyrighted content does not violate copyright law, aligns with AI developer arguments, but the proposed collective licensing mechanism could create new commercial structures requiring negotiation and documentation.
The impersonation provisions create a federal legal basis for advertising restrictions on synthetic representations of real individuals, with carve-outs for expression protected by the First Amendment. This requires clear policies on voice and likeness use in AI-generated advertising content.
None of these recommendations are law. They require congressional action. But the administration's willingness to engage at this level of specificity (seven chapters, dozens of discrete recommendations, explicit positions on contested legal questions) signals that AI governance will be a significant legislative priority in the coming congressional session.
For legal teams, tracking the specific legislative language as bills emerge will be essential. The gap between these recommendations and final statute may be substantial, and the details will determine compliance obligations.