California Enacts SB 53, First-in-the-Nation Frontier AI Transparency and Safety Law
California enacts SB 53, the Transparency in Frontier AI Act, requiring transparency, incident reporting, and whistleblower protections. AG enforcement and annual updates follow.

California Enacts SB 53: Transparency in Frontier Artificial Intelligence Act
On September 29, 2025, Governor Gavin Newsom signed SB 53 into law, establishing a first-in-the-nation framework for safe, secure, and trustworthy frontier AI. The Transparency in Frontier Artificial Intelligence Act (TFAIA) sets clear expectations for developers while keeping California's innovation engine in motion.
The law builds on the state's expert-led report requested by the Governor earlier this year and fills the gap left by stalled federal action. It focuses on practical guardrails: transparency, incident reporting, whistleblower protections, and ongoing updates aligned with national and international standards.
Why this matters for policymakers and counsel
SB 53 formalizes a compliance baseline for "frontier" model developers and establishes processes that public agencies and contractors will encounter, especially around safety incidents, transparency frameworks, and accountability. Expect ongoing iteration: the California Department of Technology will recommend updates each year to keep pace with technical change and international norms.
California's AI footprint
California remains the hub for AI development and investment. The state hosts 32 of the 50 top AI companies and led U.S. demand for AI talent in 2024, accounting for 15.7% of job postings, well ahead of Texas (8.8%) and New York (5.8%), per the 2025 Stanford AI Index.
More than half of global VC funding for AI and machine learning startups went to Bay Area companies in 2024. California is also home to three of the four companies that have crossed the $3 trillion valuation mark (Google, Apple, and Nvidia), each deeply invested in AI and responsible for hundreds of thousands of jobs.
Key provisions of SB 53
- Transparency: Large frontier model developers must publish a public framework describing how they incorporate national standards, international standards, and industry-consensus best practices into their frontier AI development.
- Innovation (CalCompute): A new consortium within the Government Operations Agency will scope a framework for a public computing cluster to advance AI that is safe, ethical, equitable, and sustainable.
- Safety: Establishes a mechanism for companies and the public to report potential critical safety incidents to the California Office of Emergency Services.
- Accountability: Protects whistleblowers who disclose significant health and safety risks posed by frontier models and creates a civil penalty for noncompliance, enforceable by the Attorney General.
- Responsiveness: Directs the California Department of Technology to recommend annual updates based on multistakeholder input, technical developments, and international standards.
Statements from state leadership and experts
Governor Gavin Newsom: "California has proven that we can establish regulations to protect our communities while ensuring the AI industry continues to thrive. This legislation strikes that balance and builds public trust as this emerging technology evolves."
Senator Scott Wiener: "With a technology as transformative as AI, we have a responsibility to support innovation while putting in place commonsense guardrails to understand and reduce risk. California is stepping up as a global leader on both innovation and safety."
Expert panel (Mariano-Florentino Cuéllar, Dr. Fei-Fei Li, Jennifer Tour Chayes): "TFAIA advances transparency and 'trust but verify' principles from California's first-in-the-nation report. Policy should continue to emphasize scientific review to keep America at the forefront of technology."
What agencies, counsel, and contractors should do next
- Assess whether your organization qualifies as a frontier model developer or interfaces with one through procurement, grants, or partnerships.
- Draft a public transparency framework aligned to recognized standards and best practices. Consider mapping to the NIST AI Risk Management Framework and relevant ISO/IEC guidance.
- Stand up or refine safety incident intake, triage, and escalation procedures to interface with California OES where appropriate.
- Update whistleblower policies and reporting channels to reflect protections related to frontier model risks.
- Engage in multistakeholder processes led by the California Department of Technology; track annual recommendations and adjust internal controls accordingly.
- Monitor CalCompute's development for research collaboration, workforce development, and infrastructure access opportunities.
- Coordinate early with legal and compliance teams on recordkeeping, reporting thresholds, and AG enforcement exposure.
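For teams standing up incident intake and triage, a purely illustrative sketch of an internal incident record follows. This is not an SB 53 requirement or an OES form; every field name, class name, and severity tier here is a hypothetical assumption, shown only to suggest how a triage-and-escalation step might be structured internally.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Severity(Enum):
    """Hypothetical internal severity tiers; SB 53 defines its own terms."""
    LOW = 1
    MODERATE = 2
    CRITICAL = 3  # candidate for escalation and possible external reporting

@dataclass
class SafetyIncident:
    """Illustrative intake record for a potential frontier-model safety
    incident. Field names are assumptions, not statutory language."""
    summary: str
    model_name: str
    severity: Severity
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def needs_escalation(self) -> bool:
        # Route critical incidents to legal/compliance, who decide whether
        # external reporting (e.g., to California OES) is appropriate.
        return self.severity is Severity.CRITICAL

incident = SafetyIncident(
    summary="Model produced unsafe output in red-team test",
    model_name="example-frontier-model",
    severity=Severity.CRITICAL,
)
print(incident.needs_escalation())  # True
```

In practice, counsel would map fields and thresholds like these to the statute's actual definitions and reporting triggers rather than inventing them internally.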
Enforcement and ongoing oversight
The Attorney General may enforce civil penalties for noncompliance. The California Department of Technology will propose annual updates to keep the law aligned with advances and international standards. OES will serve as a channel for reporting potential critical safety incidents. The Government Operations Agency will host CalCompute's work on public computing infrastructure.
The bottom line
SB 53 gives California a clear approach to frontier AI governance: publish how you align to standards, report serious risks, protect truth-tellers, and keep improving the guardrails. Expect continued iteration and opportunities to participate as the state refines guidance and stands up CalCompute. Governor Newsom issued a signing message alongside the bill.
Helpful resources
- Stanford AI Index
- NIST AI Risk Management Framework
- Training: AI courses by job (policy, legal, compliance)
This article is for informational purposes and does not constitute legal advice.