Congress Tries to Bundle AI Preemption With Kids' Online Safety, States Push Back

Congress may tie AI preemption to child safety bills and the NDAA, while states resist. Legal teams should plan for shifting obligations, design changes, and litigation risk.

Categorized in: AI News, Legal
Published on: Nov 27, 2025

Congress eyes AI preemption tied to child safety bills: what legal teams need to know

Congress is again considering federal preemption of state AI laws, this time attaching the effort to child online safety measures and possibly the National Defense Authorization Act. The move revives a recurring idea: wipe out a patchwork of state rules in favor of a federal approach that doesn't yet exist.

States are pushing back. NASCIO's executive director Doug Robinson urged congressional leaders to reject any blanket preemption, warning it would "strip states of the ability to address real AI risks in their communities and provide needed protection for children." In short: without comprehensive federal AI law, states want room to act.

The activity isn't theoretical. According to Enough Abuse, 45 states have enacted laws to address AI-generated or edited child sexual abuse material. The National Center for Missing and Exploited Children reported 485,000 AI-related CSAM tips in the first half of the year, up from 67,000 in all of 2024. Even many child-safety advocates aren't sold on federal preemption.

Public pressure is high. Michael Kleinman of the Future of Life Institute called House efforts "ghoulish" following a Nov. 19 hearing on chatbot harms. He flagged seven lawsuits against OpenAI tied to child deaths and the recall of the Kumma talking plush bear after a large language model delivered graphic, unsafe content.

How the bundle could advance

The House Energy and Commerce Committee weighed 19 child online safety bills, including the Kids Online Safety Act, which previously cleared the Senate despite free speech concerns. Alex Whitaker of NASCIO said pairing preemption with child protection is strategic but doesn't fix the core risk: removing state agility. Even with federal laws in place, states could be blocked from responding quickly to new harms.

Preemption via executive action and NDAA

Draft executive orders circulating from the White House would create an AI litigation task force to challenge state statutes and restrict funds for states with "onerous" AI laws. A separate draft reported this week would direct the DOJ, Commerce, FTC, and FCC to penalize states with restrictive rules while minimizing the roles of technical bodies like NIST and OSTP. That omission matters: many companies structure their AI governance around frameworks such as the NIST AI Risk Management Framework.

Why past efforts stalled

Previous congressional preemption attempts met bipartisan resistance. Commerce officials argued preemption was necessary for national security and U.S. leadership in AI, but lawmakers raised concerns about overreach and consumer harm. In May, more than 260 state legislators warned that preemption would gut protections against deepfake scams and algorithmic discrimination. In July, opposition led by Sen. Marsha Blackburn argued it would let Big Tech exploit kids, creators, and conservatives.

State AGs and policy groups push back

Thirty-six attorneys general urged Congress to preserve state authority, citing deepfakes, voice clones, and "sycophantic and delusional" generative outputs that can push people toward self-harm and violence. Their message: fast-moving technologies need agile responses, and states are the testing ground for what works.

Travis Hall of the Center for Democracy and Technology put it bluntly: the push is about limiting oversight and creating a vacuum of accountability. He also warned that the latest plan would sweep far beyond child safety into areas like workers' rights, anti-discrimination, and algorithmic pricing, eliminating state rules without solid federal replacements.

Implications for corporate counsel and compliance

  • Map exposure by state. Track AI-CSAM, deepfake, consumer protection, employment, and biometric statutes where you operate. Identify where your products or models intersect with high-risk categories (see the first sketch after this list).
  • Plan for both scenarios. If preemption passes, which state obligations disappear and which remain (e.g., federal UDAP, product liability, privacy, child protection statutes, consent decrees)? If it fails, which state obligations tighten in 2026?
  • Update contracts. Require vendors and model providers to meet the most stringent applicable state standard by default. Add audit rights, incident reporting SLAs, indemnities for unlawful AI outputs, and clear responsibility for trust-and-safety controls.
  • Reassess litigation risk. Expect claims across product liability, negligence, unfair practices, and wrongful death. Review arbitration clauses, class-action exposure, and forum selection. Preserve records that show testing, red-teaming, disclosures, and mitigations.
  • Child safety controls. If KOSA-like duties move, prepare for design changes: age gating, default safety settings, profiling limits, and responsive complaint handling. Align marketing and UX so promises match technical reality.
  • Governance now, not later. Stand up an AI risk program tied to recognized guidance like the NIST AI RMF. Document model provenance, evaluations, human oversight, and incident response. This helps with AG inquiries and discovery.
  • Monitor Congress. Watch the NDAA conference process and any manager's amendments for stealth preemption clauses. Track House Energy and Commerce markups and the final KOSA language for preemption and private right of action signals.
  • Engage states. Maintain open channels with AGs and state CIO offices. Consider comments, hearings, or industry codes of conduct to show good faith and reduce enforcement heat.
  • Insurance review. Validate coverage for AI-related harms, including mental health impacts, content moderation failures, IP claims, and data leakage. Stress-test notice and cooperation provisions.
  • Crisis playbooks. Build runbooks for AI content incidents (e.g., self-harm prompts, deepfake scams). Define takedown timelines, escalation paths, regulator notifications, and public statements (see the second sketch after this list).
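
As a rough illustration of the first item above, here is a minimal Python sketch of a state-by-state exposure map. The statute labels, risk categories, and enforcement notes are hypothetical placeholders, not legal research; the point is the shape of the record, which a legal team could populate from its own 50-state survey.

```python
from dataclasses import dataclass, field

@dataclass
class StateAIExposure:
    """One state's AI-related obligations relevant to a product line.

    All example values below are hypothetical placeholders,
    not a survey of actual statutes.
    """
    state: str
    statutes: list[str] = field(default_factory=list)        # e.g., AI-CSAM, deepfake, biometric
    risk_categories: list[str] = field(default_factory=list) # where products intersect
    enforcement: str = "AG"                                   # AG action, private right, or both

# Hypothetical entries showing the shape of the map, not real legal research.
exposure_map = {
    "TX": StateAIExposure(
        state="TX",
        statutes=["ai-csam", "deepfake-election"],
        risk_categories=["generative-imagery"],
        enforcement="AG",
    ),
    "IL": StateAIExposure(
        state="IL",
        statutes=["biometric-privacy"],
        risk_categories=["face-matching"],
        enforcement="AG + private right of action",
    ),
}

def states_with(category: str) -> list[str]:
    """List states whose tracked statutes touch a given risk category."""
    return [s for s, e in exposure_map.items() if category in e.risk_categories]

print(states_with("generative-imagery"))  # ['TX']
```

Keeping the map as structured data rather than a memo makes it queryable: when a new feature touches a risk category, one lookup surfaces every state whose statutes are in play.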
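For the crisis-playbook item, here is a similarly hedged sketch of a runbook encoded as data, so takedown timelines and escalation owners are explicit and testable. The steps, owners, and hour counts are assumptions for illustration only, not statutory deadlines or legal advice.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RunbookStep:
    action: str
    owner: str           # role, not a named person
    deadline_hours: int  # hypothetical SLA; tune to contracts and state law

# Hypothetical runbook for a deepfake-scam incident.
DEEPFAKE_SCAM_RUNBOOK = [
    RunbookStep("Take down or block the offending output", "trust-and-safety", 2),
    RunbookStep("Preserve logs, prompts, and model version", "engineering", 4),
    RunbookStep("Escalate to legal; assess notification duties", "legal", 8),
    RunbookStep("Notify regulators / state AGs where required", "legal", 24),
    RunbookStep("Prepare or publish a holding statement", "comms", 24),
]

def overdue(step: RunbookStep, hours_elapsed: float) -> bool:
    """Flag a step whose SLA window has lapsed."""
    return hours_elapsed > step.deadline_hours

for step in DEEPFAKE_SCAM_RUNBOOK:
    print(f"{step.deadline_hours:>3}h  {step.owner:<17}  {step.action}")
```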

What to watch next

Short term: NDAA negotiations, House action on child safety bills, and any executive order that targets state AI laws. Medium term: states will keep legislating; 38 have enacted AI laws this year alone. The National Conference of State Legislatures maintains a public tracker worth bookmarking: NCSL AI Legislation.

If you're standing up in-house training on AI risk, compliance, and policy, here's a practical catalog of courses by job function: Complete AI Training: Courses by Job.

