California's Transparency in Frontier AI Act: Practical Takeaways for Legal Teams
Published: 10 Oct 2025 - San Francisco, United States
California is the first US state to pass a law aimed at the largest "frontier" AI models. The Transparency in Frontier Artificial Intelligence Act (SB 53) centers on disclosures, not direct controls. Experts see it as a modest step that sets expectations without creating hard liability or strong enforcement.
What the law requires
- Public reporting on how developers applied national and international safety frameworks and best practices during model development.
- Mandatory incident reporting for major harms attributable to AI models, including large-scale cyberattacks, deaths of 50 or more people, and significant monetary losses.
- Whistleblower protections for those raising safety concerns.
As Annika Schoene (Northeastern University) noted, the statute is disclosure-heavy and light on enforceability, especially given limited public-sector expertise on frontier AI. Robert Trager (Oxford) countered that disclosure is a pragmatic start while evaluation science matures.
Scope gaps and liability exposure
The law applies only to the largest models. Smaller but high-risk systems, such as AI companions or tools used in policing, immigration, or therapy, fall outside its scope. Laura Caroli (CSIS) found that its reporting duties mirror voluntary commitments from last year's Seoul summit, narrowing its practical impact and concentrating obligations on a few firms.
The limits are visible in consumer-harm scenarios. A San Francisco lawsuit alleges that prolonged chat-based interactions contributed to a teenager's death by suicide. Under the new California statute, a developer must disclose its governance measures but faces no liability for harms caused by the model's outputs. That liability gap is a core policy fault line.
Innovation vs. accountability
California pared back earlier proposals (like kill switches and third-party evaluations) after concerns about chilling innovation. Dean Ball (former White House OSTP adviser) called the final law reasonable but warned of escalating risks from cyber and bio misuse. Trager suggested public disclosures could still furnish a record that supports litigation in cases of misuse.
How it compares with the EU
Gerard De Graaf (EU) underscored that the EU AI Act pairs transparency with binding obligations for both general-purpose and high-risk systems. California's approach is narrower. For counsel operating across jurisdictions, expect divergent duties and potential conflicts between US state-level disclosure regimes and EU compliance obligations.
See the European Commission's AI Act overview for the EU baseline.
Federal and state outlook
Other states (for example, Colorado) are moving on AI rules that take effect next year. At the federal level, momentum is slower: a recent bill from Senator Ted Cruz would let companies apply for waivers from regulations they argue constrain growth. If the patchwork continues, multi-jurisdiction AI governance will become a standing compliance project for in-house teams.
Startup carve-outs and infrastructure
To avoid burdening early-stage builders, California limited the law to the largest models and backed a public compute cluster (CalCompute) to support startups. Steve Larson (former state official) described the statute as a "practice law": a signal that state oversight has arrived and will harden over time.
Practical steps for legal, risk, and compliance teams
- Map exposure: Identify where your organization builds, buys, or integrates large models vs. high-risk smaller models (companions, investigative tools, therapy-like features).
- Update contracts: Add disclosure, incident-reporting, audit, and termination rights for AI vendors. Require immediate notice of safety events and model changes that affect risk.
- Prepare incident protocols: Define triggers that match the statute's reportable events and your own lower internal thresholds. Align legal, security, and comms on response (a minimal triage sketch follows this list).
- Stand up a protected channel: Ensure whistleblower routes cover AI safety and are accessible to employees and critical contractors.
- Document frameworks: Maintain a clear record of which national/international safety frameworks and best practices are applied across the AI lifecycle.
- Anticipate discovery: Treat disclosures as potentially discoverable. Build a review process for public statements, risk assessments, and safety reports.
- Track state/federal movement: Monitor California rulemaking, Colorado's rollout, and any federal preemption or waiver mechanisms that could shift obligations.
- Cross-border planning: For EU exposure, prepare for risk-tiering and obligations that go beyond disclosure, including data, testing, and post-market duties.
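The incident-protocol step above lends itself to a small amount of automation. Below is a minimal Python sketch of an incident-triage rule set, offered purely as an illustration: the `Incident` fields, the threshold constants, and the `triage` function are hypothetical assumptions, and the statute's actual definitions of reportable events must govern any real implementation.

```python
# Minimal sketch of AI-safety incident triage: statutory-style triggers
# plus stricter internal thresholds. All categories and limits here are
# illustrative placeholders, not the statute's text.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Severity(Enum):
    INTERNAL_REVIEW = "internal review"    # below statutory triggers
    STATUTORY_REPORT = "statutory report"  # plausibly reportable event


@dataclass
class Incident:
    deaths: int = 0                 # fatalities attributed to the model
    monetary_loss_usd: float = 0.0  # estimated damages
    cyberattack: bool = False       # large-scale cyber incident flag


# Hypothetical limits: internal triggers sit well below statutory ones
# so legal review starts before any reporting duty plausibly attaches.
STATUTORY_DEATHS = 50
INTERNAL_DEATHS = 1
INTERNAL_LOSS_USD = 1_000_000.0


def triage(incident: Incident) -> Optional[Severity]:
    """Classify an incident against internal and statutory triggers."""
    if incident.deaths >= STATUTORY_DEATHS or incident.cyberattack:
        return Severity.STATUTORY_REPORT
    if incident.deaths >= INTERNAL_DEATHS or incident.monetary_loss_usd >= INTERNAL_LOSS_USD:
        return Severity.INTERNAL_REVIEW
    return None  # log only


if __name__ == "__main__":
    print(triage(Incident(monetary_loss_usd=2_500_000)))  # INTERNAL_REVIEW
```

The design point is the ordering, not the numbers: internal thresholds should trip first, so legal, security, and comms see an event before any external duty is in play.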
What to watch next
- Whether California supplements disclosures with enforceable safety or testing requirements.
- How courts treat disclosure gaps in negligence, product liability, or consumer protection claims.
- State action on high-risk smaller models (therapy, companions) and sector-specific rules.
- Federal action on liability and any waiver regime's scope and standards.
Bottom line: this is a disclosure statute with signaling value. Treat it as an early compliance floor, not a ceiling, and build a record that will stand if, and more likely when, obligations expand.