White House Holds Back on AI Standard as States Win Bipartisan Backing

Washington punted on AI rule specifics as Congress wrestles over federal vs. state lanes. For now, expect a patchwork, carved-out areas, and NIST to steer a tighter standards track.

Published on: Jan 16, 2026

White House holds back on national AI framework specifics as Congress seeks clear "federal vs. state" lanes

In a House hearing this week, the administration's AI lead, Michael Kratsios, offered few details on legislative recommendations for a national AI standard that could preempt state laws. The move follows a December executive order directing federal agencies to challenge "onerous" state AI laws and limit certain federal funds to states that adopt them.

The politics are messy. There's bipartisan resistance to sweeping preemption, even as industry asks for one rulebook. For executives and R&D leaders, the takeaway is simple: plan for a patchwork in the near term while a federal framework remains uncertain. Executives can start with AI for Executives & Strategy to align governance and strategy.

What the administration signaled

Kratsios emphasized the need for "regulatory clarity and certainty," and said the administration sees "opportunities for collaboration" with Congress. He stopped short of specifics on what a federal AI standard would cover or when recommendations will land.

The executive order carves out exceptions for state laws on child safety, data center infrastructure, and state procurement of AI. That hints at where state authority may remain, even under preemption.

Congress' mood: a federal lane and a state lane

Subcommittee Chair Jay Obernolte voiced support for a federal framework that keeps the U.S. competitive, while affirming a real role for states. He pointed to California's laws requiring AI developers to report catastrophic model risks and disclose training data; those requirements could multiply if more states follow.

Obernolte pressed for clear "guardrails" defining what belongs under interstate commerce (federal only) versus where states act as "laboratories of democracy." Kratsios acknowledged multi-state compliance burdens for startups but offered no timeline on a preemptive standard.

Pushback: constitutional concerns and risk gaps

Rep. Zoe Lofgren challenged the executive order's attempt to shift power from Congress and the states to the executive branch, calling it unconstitutional. She backed the goals of the administration's AI Action Plan (innovation, infrastructure, diplomacy, and security) while arguing it underplays risk, including deepfakes.

Deepfakes and platform accountability

Lofgren raised concerns about X (formerly Twitter) after its Grok chatbot generated sexualized images of real people, including minors. The Senate passed legislation by voice vote to allow victims to sue platforms over nonconsensual AI-generated intimate imagery.

Kratsios said misuse of technology "requires accountability," not blanket bans. Expect more legal exposure for platforms and toolmakers around image synthesis, content provenance, and age safety.

Standards, institutions, and budgets

Lawmakers pressed for clarity on the National Institute of Standards and Technology and its Center for AI Standards and Innovation (CAISI), which replaced the former U.S. AI Safety Institute. The administration directed NIST to revise its AI Risk Management Framework to remove references to misinformation, DEI, and climate change, positioning NIST to focus on core metrology and technical standards.

Obernolte indicated plans to introduce the Great American AI Act to codify CAISI and praised continued support for the National Artificial Intelligence Research Resource (NAIRR). Rep. Haley Stevens criticized proposed cuts to NIST and warned of impacts on cybersecurity, privacy, and advanced manufacturing. The president's budget sought a $325 million reduction; a Senate appropriations package would reject it.

For technical leaders, the signal is clear: NIST will remain central, but the scope of its guidance may narrow, and funding dynamics are still in play. CIOs and IT leaders should review the AI Learning Path for CIOs to align strategy and governance with evolving standards.

What executives and R&D leaders should do now

  • Plan for dual compliance: keep a living map of state AI requirements (risk reporting, training data disclosures, safety testing) and be ready to adjust if partial federal preemption arrives.
  • Build a "federal-ready" core: standardize model reporting, evals, incident response, and audit trails so you can pivot quickly to a single national standard if it's enacted.
  • Prioritize child safety and content provenance: invest in age safety controls, watermarking/provenance, and red-team processes for synthetic media. Legal exposure is rising.
  • Engage early with NIST frameworks: align controls to NIST's AI Risk Management Framework (AI RMF) and track upcoming edits so you're not surprised by scope changes.
  • Budget for validation: allocate resources for independent testing, red teaming, and third-party assurance, especially if you sell into regulated states or federal procurement.
  • Vendor due diligence: require model providers to disclose eval methods, training data provenance, and catastrophic risk assessments. Bake these into contracts.
  • Data center and infrastructure: if operating facilities, track state-level requirements that may persist under any federal standard's carve-outs.
  • Track timelines, not headlines: preemption is politically contested. Build roadmaps that work under a persistent patchwork, with optionality for a federal pivot.
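For teams that track obligations in code rather than spreadsheets, the "living map" in the first bullet can be as simple as a jurisdiction-keyed registry that resolves to the internal controls you must operate. The sketch below is purely illustrative: the state entries, requirement topics, and control names are hypothetical placeholders, not actual statutory content.

```python
# Minimal sketch of a jurisdiction-keyed map of AI requirements.
# All entries below are hypothetical examples, not real statutes.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Requirement:
    topic: str    # e.g. "risk_reporting", "training_data_disclosure"
    control: str  # internal control that satisfies the requirement

@dataclass
class JurisdictionMap:
    requirements: dict = field(default_factory=dict)

    def add(self, state: str, req: Requirement) -> None:
        """Register a requirement under a state code."""
        self.requirements.setdefault(state, set()).add(req)

    def controls_needed(self, states):
        """Union of internal controls across the states you operate in."""
        return {
            req.control
            for state in states
            for req in self.requirements.get(state, set())
        }

# Hypothetical entries for illustration only
m = JurisdictionMap()
m.add("CA", Requirement("risk_reporting", "catastrophic-risk-eval"))
m.add("CA", Requirement("training_data_disclosure", "data-provenance-log"))
m.add("CO", Requirement("impact_assessment", "algorithmic-impact-review"))

print(sorted(m.controls_needed(["CA", "CO"])))
```

Keeping the map as data (rather than prose in policy docs) makes the "federal-ready core" pivot concrete: if partial preemption lands, you re-key federal requirements into the same registry and diff the resulting control sets.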

The bottom line

The administration wants a national standard, but Congress isn't ready to sideline the states. Until there's a bill with clear "lanes," expect overlapping rules, targeted carve-outs, and selective enforcement.

If you lead AI strategy, assume state-led requirements will persist through at least the next planning cycle. Build a compliance core once, map it to state demands, and keep a light lift ready for whatever federal standard finally shows up.

Looking to upskill your team on AI governance, policy, and risk? Regulatory and compliance specialists can start with the AI Learning Path for Regulatory Affairs Specialists.

