California moves first on AI safety, streaming ad volume, and account deletion
Federal rules are inching along. California just set the pace with three new laws that will shape how AI labs, streamers, and social platforms operate inside the state, and likely beyond.
For in-house counsel and outside advisors, the takeaway is simple: scope exposure now, document proof of compliance, and prepare to answer regulators with facts, not promises.
AI: "Transparency in Frontier Artificial Intelligence Act"
The state's new AI law pushes safety and disclosure to the front of the compliance stack. Covered AI developers must disclose potential harms from their systems and publish the safety protocols they use to reduce catastrophic risk.
Enforcement sits with California's Office of Emergency Services. Coverage is triggered if a company hits a defined compute threshold dedicated to model training or posts at least $500 million in annual revenue. The law also includes protections for whistleblowers.
State leadership argues stronger safety rules and innovation can coexist. Many observers see this as a template other jurisdictions will copy.
- Action for counsel: confirm whether your organization, or any of its vendors, meets the compute or revenue threshold.
- Stand up written safety protocols, red-teaming records, and incident response plans you can hand to OES on request.
- Build a disclosure workflow: who drafts risk statements, who signs, and how updates are versioned (a minimal record sketch follows this list).
- Refresh whistleblower policies, escalation paths, and anti-retaliation training.
- Add AI safety and disclosure obligations to vendor contracts; require attestations if you rely on third-party models.
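To make the versioning piece concrete, here is a minimal sketch of an immutable disclosure-record structure. It is illustrative only: the field names, roles, and hashing choice are assumptions, not anything the statute prescribes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib

@dataclass(frozen=True)
class DisclosureVersion:
    """One immutable version of a published risk/safety disclosure (hypothetical schema)."""
    version: int
    drafted_by: str          # e.g. "safety_team" -- role names are placeholders
    approved_by: str         # e.g. "general_counsel"
    text: str                # the disclosure language as published
    approved_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def content_hash(self) -> str:
        """Fingerprint of the published text, useful for proving what was live and when."""
        return hashlib.sha256(self.text.encode("utf-8")).hexdigest()

# Version history is append-only; earlier entries are never edited in place.
history = [
    DisclosureVersion(1, "safety_team", "general_counsel", "Initial catastrophic-risk disclosure."),
    DisclosureVersion(2, "safety_team", "general_counsel", "Updated after Q3 red-team findings."),
]
current = history[-1]
print(f"v{current.version} approved by {current.approved_by}, hash {current.content_hash[:12]}")
```

Keeping each approved version immutable, with its content hash and approver on record, makes it straightforward to show exactly which language was live on a given date.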
Streaming ads: SB 576 extends volume limits
California extended TV commercial volume rules to streaming environments, including ad-supported platforms. The rule: commercial volume can't exceed the viewer's chosen setting, closing a gap that let streaming ads spike louder than the show.
This builds on the federal Commercial Advertisement Loudness Mitigation (CALM) Act for broadcast and cable. See the FCC's overview of the CALM Act for context: FCC CALM Act.
- Audit ad insertion pipelines across CTV and mobile apps; test actual loudness, not just spec targets (see the measurement sketch after this list).
- Flow down volume compliance and remediation SLAs to ad networks, SSPs, and creative partners.
- Log measurement, exceptions, and fixes to prove due diligence to regulators and plaintiffs' counsel.
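As a starting point for the loudness audits above, the sketch below uses ffmpeg's loudnorm filter in analysis mode to compare the integrated loudness (LUFS) of an ad rendition against the surrounding program audio. The file names, measurement target, and 2 LU tolerance are placeholders; set thresholds that match your actual compliance policy.

```python
import json
import subprocess

def measure_lufs(path: str) -> float:
    """Measure integrated loudness (LUFS) with ffmpeg's loudnorm filter in analysis mode."""
    cmd = [
        "ffmpeg", "-hide_banner", "-nostats", "-i", path,
        "-af", "loudnorm=I=-24:TP=-2:LRA=7:print_format=json",
        "-f", "null", "-",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    # loudnorm prints its JSON summary at the end of stderr
    json_start = result.stderr.rfind("{")
    stats = json.loads(result.stderr[json_start:])
    return float(stats["input_i"])  # integrated loudness in LUFS

if __name__ == "__main__":
    # Hypothetical file names; swap in real program and ad renditions.
    program_lufs = measure_lufs("program_segment.mp4")
    ad_lufs = measure_lufs("inserted_ad.mp4")
    delta = ad_lufs - program_lufs
    print(f"program: {program_lufs:.1f} LUFS, ad: {ad_lufs:.1f} LUFS, delta: {delta:+.1f} LU")
    TOLERANCE_LU = 2.0  # placeholder threshold; set per your compliance policy
    if delta > TOLERANCE_LU:
        print("FLAG: ad exceeds program loudness beyond tolerance")
```

Logging the measured values alongside flagged deltas gives you the evidence trail the third bullet calls for.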
Account cancellation and data deletion: AB 656
Social media companies must make canceling accounts straightforward and ensure personal data is deleted immediately after cancellation. This pairs well with the FTC's "Click to Cancel" direction on subscriptions.
Reference: FTC proposal to make subscription cancellations simpler: FTC Click to Cancel.
- Remove dark patterns; ensure parity between sign-up and cancel paths (same channel, same friction).
- Implement deletion across primary stores, backups (on practical timelines), logs, and vendor systems, then send confirmation to users.
- Update DPAs to require processors to delete on your signal and certify completion.
- Align retention schedules and legal hold processes so "immediate deletion" does not collide with preservation duties.
- Test cancel and deletion flows quarterly; document results and fixes (a minimal test sketch follows this list).
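A quarterly check can be as simple as a scripted end-to-end test. The sketch below assumes a hypothetical REST API: the base URL, endpoint paths, and the 404-after-deletion behavior are all assumptions, so adapt it to your actual service and extend it to cover backups and vendor systems.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical service endpoints

def test_cancel_then_delete(session_token: str, user_id: str) -> None:
    """End-to-end check: cancel the account, then verify the profile is no longer retrievable."""
    headers = {"Authorization": f"Bearer {session_token}"}

    # 1. Cancel through the same channel a user would use.
    cancel = requests.post(f"{BASE_URL}/v1/accounts/{user_id}/cancel", headers=headers, timeout=10)
    assert cancel.status_code == 200, f"cancel failed: {cancel.status_code}"

    # 2. Confirm the primary record is gone (this sketch expects 404 once deletion completes).
    profile = requests.get(f"{BASE_URL}/v1/users/{user_id}", headers=headers, timeout=10)
    assert profile.status_code == 404, f"profile still retrievable: {profile.status_code}"

    # 3. Record the evidence your team will hand to regulators.
    print(f"user {user_id}: cancel acknowledged, profile returns 404")

if __name__ == "__main__":
    test_cancel_then_delete(session_token="TEST_TOKEN", user_id="qa-test-user-001")
```

Archive the output of each run so counsel has a dated record of successful deletions rather than reconstructing one after an inquiry.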
Political context and what it means for risk
Governor Gavin Newsom has backed aggressive tech regulation, child safety, and consumer protection, but he also vetoed a sweeping 2024 bill that would have made AI companies liable for downstream harms. Expect California to keep pushing process, safety, and disclosure requirements while stopping short of blanket strict liability.
If your products touch California users, or your vendors' products do, build a single compliance playbook you can reuse in other states. It's cheaper than retrofitting after the fact.
What legal teams should do this quarter
- Map exposure: AI development, streaming ad operations, and social account flows; include third parties.
- Stand up evidence: written AI safety protocols, OES-facing disclosures, ad volume reports, deletion certificates.
- Update contracts: AI safety attestations, volume compliance, deletion SLAs, audit rights, and indemnities.
- Train customer support and engineering on cancellation, deletion, and escalation steps.
- Brief the board and set a metrics dashboard (incidents, fixes, regulator inquiries).
If your team needs a structured primer for AI risk, governance, and vendor oversight, see curated training by job role: Complete AI Training.