California Enacts Frontier AI Act, First US Law Requiring AI Transparency and Risk Disclosures

California's TFAIA requires large AI developers to disclose risk assessments and safety information. Counsel should assess scope, prepare disclosure summaries and incident reports, and tighten model security.

Categorized in: AI News, Legal
Published on: Oct 06, 2025

California's new AI transparency law: what in-house counsel needs to do now

California has enacted SB 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA). It is the first US state law that requires large AI developers to disclose risk assessments and safety information about their most advanced models.

This sets a baseline for transparency and governance expectations that will influence contracts, compliance programs, and product release processes, whether your company is based in California or simply sells into the state.

Who is likely in scope

The law targets "large" AI developers and operators of frontier or foundation models. Expect thresholds tied to model scale, capability, or user reach. Smaller builders may be outside initial scope, but downstream deployers of covered models could still inherit duties through contracts and procurement.

Action: review the statutory definitions and any forthcoming rulemaking to confirm whether your organization, vendors, or key partners are covered.
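
To make that scoping exercise concrete, a first-pass screen can be expressed in a few lines of code. The sketch below is illustrative only: the compute, revenue, and California-distribution triggers are hypothetical placeholders standing in for the statutory definitions, not the actual figures.

```python
from dataclasses import dataclass

# Hypothetical screening thresholds. Placeholders only; replace with the
# definitions in the enacted text and any implementing rulemaking.
TRAINING_COMPUTE_FLOP_TRIGGER = 1e26            # assumed compute threshold
LARGE_DEVELOPER_REVENUE_TRIGGER = 500_000_000   # assumed annual revenue (USD)

@dataclass
class ModelProfile:
    name: str
    training_compute_flop: float
    developer_annual_revenue_usd: float
    offered_in_california: bool

def likely_in_scope(profile: ModelProfile) -> bool:
    """First-pass screen only; it does not replace statutory analysis."""
    if not profile.offered_in_california:
        return False
    return (
        profile.training_compute_flop >= TRAINING_COMPUTE_FLOP_TRIGGER
        and profile.developer_annual_revenue_usd >= LARGE_DEVELOPER_REVENUE_TRIGGER
    )
```

Running every model and key vendor relationship through a screen like this produces the inventory that the rest of the compliance work depends on.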

Core obligations to plan for

  • Risk assessment disclosures: Periodic assessments of safety, security, misuse, and systemic risks, with summaries or filings to designated authorities.
  • Safety practices: Evidence of pre- and post-release evaluations such as red-teaming, together with documented safety policies and mitigations.
  • Incident reporting: Timely reports of material safety incidents or misuse, plus corrective actions.
  • Security of model assets: Controls for model weights, access, and supply chain integrity.
  • Recordkeeping: Documentation that supports disclosures and demonstrates ongoing risk management (a minimal record schema is sketched after this list).
  • Governance and accountability: Named owners for compliance, internal escalation paths, and board-level oversight cadence.
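
Recordkeeping is easier if each assessment is captured in a structured record from the start rather than reconstructed at filing time. Below is a minimal sketch of such a record; the field names are assumptions chosen for illustration, since the statute does not dictate a format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskAssessmentRecord:
    """Illustrative risk assessment record. Fields are assumptions, not
    statutory requirements; map them to the final disclosure categories."""
    model_name: str
    assessment_date: date
    risks_evaluated: list[str]          # e.g., misuse, safety, systemic risk
    evaluations_run: list[str]          # e.g., red-team exercises, benchmarks
    mitigations: list[str]              # pre- and post-release controls
    incidents_since_last: list[str] = field(default_factory=list)
    compliance_owner: str = ""          # named accountability owner
    board_briefed: bool = False         # governance oversight trail
```

A record like this doubles as the evidence base for the disclosure and incident workflows described below.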

Disclosure mechanics and confidentiality

Expect a mix of public disclosures and regulator-only submissions. Plan how you will summarize risk information without exposing trade secrets or sensitive security details. Establish a clear review path with legal, security, and communications before any public release.
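
One way to operationalize that split is to tag each finding with an audience at the time it is written, so the public summary becomes a filter rather than a last-minute redaction exercise. A minimal sketch, assuming a simple two-tier classification:

```python
from dataclasses import dataclass
from enum import Enum

class Audience(Enum):
    PUBLIC = "public"               # cleared for the published summary
    REGULATOR_ONLY = "regulator"    # trade secrets, exploit detail, etc.

@dataclass
class Finding:
    title: str
    detail: str
    audience: Audience

def public_summary(findings: list[Finding]) -> list[str]:
    """Titles of findings cleared for public release."""
    return [f.title for f in findings if f.audience is Audience.PUBLIC]

def regulator_submission(findings: list[Finding]) -> list[Finding]:
    """Everything, including sensitive detail, for the regulator-only filing."""
    return list(findings)
```

Legal, security, and communications then review the audience tags as they accumulate, not a finished document under deadline pressure.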

Enforcement posture

Enforcement will likely sit with the California Attorney General, with civil penalties for non-compliance and the potential for injunctive relief. Good-faith, documented compliance can reduce exposure. Track rulemaking for penalty amounts, cure periods, and audit standards.

How it fits with other frameworks

  • EU AI Act: Overlaps in risk management, transparency, incident reporting, and documentation. Map TFAIA duties to EU requirements to avoid duplicate work. See the final text on EUR-Lex: EU AI Act.
  • US federal direction: Aligns with the White House AI Executive Order emphasis on safety testing and reporting. Reference: US AI Executive Order.
  • Sector laws: Coordinate with privacy regimes (HIPAA, the CCPA, and state analogues such as the Colorado Privacy Act), financial services rules, and critical infrastructure requirements that may already mandate risk controls.
  • Preemption risk: Monitor challenges on interstate commerce or federal preemption grounds that could alter scope or enforcement timing.

Immediate checklist for legal teams

  • Scope and applicability: Identify covered models, services, and distribution into California.
  • Gap assessment: Compare current AI governance against likely TFAIA duties (risk assessments, testing, incident reporting, security, documentation).
  • Disclosure playbook: Build a template for risk summaries, approval workflow, and regulator engagement.
  • Incident protocol: Define thresholds, timelines, roles, and evidence collection for AI-related incidents (a deadline-tracking sketch follows this list).
  • Contracts: Update supplier and customer terms for audit rights, safety testing evidence, incident cooperation, and allocation of risk.
  • Model security: Tighten controls over weights, deployment endpoints, and third-party access; document them.
  • Board reporting: Establish a regular briefing on model risks, mitigations, and compliance status.
  • Training: Brief engineering, product, and go-to-market teams on disclosure boundaries and incident triggers.
  • Monitoring: Track rulemaking, interpretive guidance, and multistate copycat bills to keep your controls current.
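
Incident reporting is where timing discipline matters most. The sketch below shows one way to track reporting deadlines; the severity tiers and reporting windows are hypothetical placeholders, since the statute and rulemaking will supply the actual thresholds and deadlines.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

class Severity(Enum):
    LOW = "low"             # internal tracking only (assumed)
    MATERIAL = "material"   # assumed reportable tier
    CRITICAL = "critical"   # assumed expedited tier

# Hypothetical reporting windows; placeholders, not statutory deadlines.
REPORTING_WINDOWS = {
    Severity.MATERIAL: timedelta(days=15),
    Severity.CRITICAL: timedelta(hours=72),
}

@dataclass
class Incident:
    description: str
    detected_at: datetime
    severity: Severity

def report_deadline(incident: Incident) -> datetime | None:
    """Deadline for notifying the designated authority, if reportable."""
    window = REPORTING_WINDOWS.get(incident.severity)
    return incident.detected_at + window if window else None
```

Wiring a check like this into on-call or incident-management tooling keeps the legal clock visible to the engineers who first see the problem.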

Timelines and preparation

Confirm the law's effective date, grace periods, and any phased thresholds. Start with high-impact systems and vendors. Pilot your risk assessment template now so you can publish or file on day one without scrambling.

Open questions to resolve early

  • How "large" and "frontier" are defined for your specific models and use cases.
  • Which risks must be disclosed publicly versus regulator-only, and how to protect confidential material.
  • Acceptable testing standards and audit evidence (e.g., red-team scope, reproducibility, third-party attestations).
  • Cross-border operations: which entities file, and how filings interact with EU and UK requirements.
  • Remedies and liabilities in commercial agreements if a counterparty fails to meet TFAIA duties.

Bottom line

California just set a clear expectation: if you build or deploy advanced AI at scale, you must show your work on safety and risk. Treat TFAIA as the baseline and design your program so it can scale to federal and international requirements with minimal rework.

If your team needs structured upskilling on AI governance and compliance, see curated options by role here: AI courses by job.