California's SB 53 Shows State AI Safety Rules Won't Stall Innovation

California's SB 53 sets a safety floor for high-risk AI while keeping innovation going. It targets catastrophic misuse, mandates transparency and ongoing compliance, and puts the Office of Emergency Services (OES) in charge of enforcement.

Categorized in: AI News, Legal
Published on: Oct 06, 2025

California's SB 53: A practical framework for AI safety that doesn't block progress

California's SB 53, signed into law this week, sets a baseline for AI safety without stopping innovation. Adam Billen, vice president of public policy at Encode AI, argues the law shows that state-level rules can protect both public safety and research velocity.

SB 53 targets "catastrophic risk" scenarios - such as models being used to attack critical infrastructure or to assist in creating bioweapons - and requires major AI developers to be transparent about their safety protocols and to actually follow them. Enforcement sits with the Office of Emergency Services (OES).

What SB 53 requires

  • Public transparency around safety and security protocols for high-risk AI systems.
  • Evidence that labs are taking steps to prevent catastrophic misuse, including testing and documentation (e.g., model cards).
  • Ongoing adherence to the stated protocols, enforceable by OES.

Billen's take is straightforward: "Companies are already doing the stuff that we ask them to do in this bill… Are they starting to skimp in some areas at some companies? Yes. And that's why bills like this are important."

The competitive pressure problem

Some labs have signaled they may "adjust" safety requirements if a rival ships a more capable system without comparable safeguards. Billen's argument is that law can lock in the safety floor companies already claim to observe, reducing the incentive to cut corners under competitive or financial pressure.

Why the push for federal preemption is growing

Opposition to SB 53 was quieter than the campaign against last year's SB 1047, but the broader narrative from parts of Silicon Valley remains that most regulation slows progress and weakens the U.S. against China. That's fueling efforts to preempt state laws.

Recent moves include a proposed moratorium on state AI regulation and new federal strategies that would override state authority. Billen expects continued attempts at preemption - including "sandbox" concepts and a narrow federal standard framed as a compromise but carrying sweeping preemptive effect.

His warning: narrowly scoped federal AI legislation could "delete federalism for the most important technology of our time."

Federal policy crosscurrents: chips, exports, and incentives

If competing with China is the priority, Billen points to export controls and chip access as the lever that matters. Proposals like a Chip Security Act and the existing CHIPS and Science Act address those issues, but industry alignment is uneven. Companies with significant China revenue exposure have raised competitiveness and security concerns, and policy signals have been inconsistent, including a partial reversal that allowed some chip sales to China under revenue-sharing terms.

Action checklist for in-house counsel and compliance leaders

  • Scope and applicability: Determine whether your AI development activities fit SB 53's "large lab" focus. If you rely on third-party frontier models, assess whether your vendors fall under SB 53 and map dependencies.
  • Protocols and controls: Document your catastrophic-risk protocols, testing plans, and red-teaming approach. Align public disclosures (e.g., model cards) with internal practices. Build change-control so updates don't drift from stated commitments (a minimal drift-check sketch follows this list).
  • Governance: Assign ownership for safety compliance, define board-level reporting, and ensure incentives don't permit relaxing standards due to market pressure.
  • Contracts: Embed SB 53-aligned commitments into vendor and customer agreements. Include audit and attestation mechanisms, incident notice, and termination rights for safety noncompliance.
  • Incident readiness: Integrate OES-facing obligations into incident response. Maintain logs, test playbooks, and define escalation paths for suspected catastrophic-risk misuse.
  • Multi-state posture: Track adjacent state rules on deepfakes, transparency, algorithmic bias, children's safety, and public-sector AI use. Harmonize disclosures to avoid conflicts.
  • Preemption watch: Monitor federal efforts that could override state law. Flag "sandbox" or narrow "standards" bills with broad preemptive language for policy and litigation risk.
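
To make the change-control point concrete, here is a minimal sketch of a disclosure drift check: it diffs an internal control register against the commitments published in a model card and flags mismatches. The file names, JSON structure, and keys are hypothetical; SB 53 does not prescribe any machine-readable format.

```python
# Minimal drift check between internal safety commitments and public
# disclosures. File names, keys, and JSON structure are hypothetical;
# SB 53 does not prescribe a machine-readable format.
import json


def load_register(path: str) -> dict:
    """Load a flat {control_id: commitment_text} register from JSON."""
    with open(path) as f:
        return json.load(f)


def find_drift(internal: dict, published: dict) -> list[str]:
    """Return human-readable discrepancies between the two registers."""
    issues = []
    for control_id, stated in published.items():
        actual = internal.get(control_id)
        if actual is None:
            issues.append(f"{control_id}: publicly stated but missing internally")
        elif actual != stated:
            issues.append(f"{control_id}: internal text diverges from disclosure")
    for control_id in internal.keys() - published.keys():
        issues.append(f"{control_id}: internal control not publicly disclosed")
    return sorted(issues)


if __name__ == "__main__":
    internal = load_register("internal_protocols.json")   # hypothetical path
    published = load_register("public_model_card.json")   # hypothetical path
    for issue in find_drift(internal, published):
        print("DRIFT:", issue)
```

Run on every protocol update (for example, as a CI step), a check like this turns "align disclosures with reality" into a repeatable control rather than a periodic manual review.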

Why this matters for legal teams

SB 53 codifies practices many labs already claim to follow, making compliance attainable while curbing backsliding under pressure. For counsel, the immediate work is operational: align disclosures with reality, document controls, and wire compliance into governance and contracts.

The bigger risk is volatility: if federal preemption advances, you may face a fast reset of obligations and enforcement venues. Build adaptable compliance programs and keep a live map of state and federal developments.
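
As one way to keep that live map queryable, here is a lightweight sketch of a regulatory tracker with a preemption watchlist. The record fields and the example entries are illustrative only, not a legal taxonomy; in practice the entries would come from counsel's monitoring workflow.

```python
# Sketch of a "live map" of AI regulatory obligations with a preemption
# watchlist. Fields and example entries are illustrative, not legal advice.
from dataclasses import dataclass


@dataclass
class Obligation:
    jurisdiction: str      # e.g., "CA" or "US-federal"
    instrument: str        # statute, bill, or rule name
    status: str            # "enacted", "proposed", ...
    preemption_risk: bool  # could a federal override reset this obligation?
    notes: str = ""


def preemption_watchlist(tracker: list[Obligation]) -> list[Obligation]:
    """Items whose obligations could be reset by federal preemption."""
    return [o for o in tracker if o.preemption_risk]


tracker = [
    Obligation("CA", "SB 53", "enacted", True,
               "Transparency plus adherence to stated protocols; OES enforcement."),
    Obligation("US-federal", "proposed moratorium on state AI rules", "proposed", True,
               "Would override state authority if enacted."),
]

for item in preemption_watchlist(tracker):
    print(f"[watch] {item.jurisdiction} / {item.instrument}: {item.notes}")
```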

Key quotes to brief stakeholders

  • "There is a way to pass legislation that genuinely does protect innovation … while making sure that these products are safe."
  • "Bills like this are important" because some firms "skimp in some areas."
  • On beating China: focus on export controls and chip access, not killing state safety bills.


Stay current

For teams upskilling on AI risk, audits, and policy, see role-based training options: AI courses by job.
