California enacts AI chatbot and social media safeguards to protect kids

California sets new rules for social platforms and AI companion bots to protect minors. SB 243 mandates age checks, clear AI disclosures, and self-harm safeguards by Jan 2026.

Categorized in: AI News, Legal
Published on: Oct 14, 2025

California enacts AI chatbot safeguards for minors: what counsel needs to know

California has enacted new laws to regulate social media platforms and AI companion chatbots with a focus on protecting children. The package includes age verification, suicide and self-harm response protocols, and prominent warnings for AI companion bots.

Central to the package is SB 243, introduced by Senators Steve Padilla and Josh Becker. The law requires platforms to clearly disclose to minors that AI companions are machine-generated and may be unsuitable for children. It is expected to take effect in January 2026.

Key requirements under SB 243

  • AI disclosure to minors: AI companion chatbots must inform under-18 users that the bot is AI-generated and may not be appropriate for children.
  • Age verification: Platforms must implement mechanisms to determine user age before minors access AI companions or related features (a minimal implementation sketch follows this list).
  • Self-harm protocols: Services must implement procedures to address suicide and self-harm content or signals, including escalation paths.
  • Warnings and UX prompts: Companion chatbots must present clear warnings to minors at interaction points.
  • Liability posture: The bills narrow the ability of companies to claim their systems "act autonomously" to avoid responsibility.
  • Scope: Likely applies to social media companies and websites offering AI services to California residents, including decentralized social platforms and gaming environments.
  • Effective date: SB 243 is expected to take effect January 2026.
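
To make the disclosure and age-gating duties concrete, here is a minimal TypeScript sketch of how a platform might gate a companion feature and decide when to surface a minor-facing disclosure. The AgeVerifier interface, the DISCLOSURE_TEXT wording, and the GateDecision shape are illustrative assumptions, not anything specified in SB 243.

```typescript
// Minimal sketch of an age gate plus minor-facing disclosure decision.
// AgeVerifier, DISCLOSURE_TEXT, and GateDecision are illustrative placeholders;
// SB 243 sets the duties but does not prescribe this design.

interface AgeVerifier {
  // Returns a verified age for the user, or null if verification is incomplete.
  verifiedAge(userId: string): Promise<number | null>;
}

const DISCLOSURE_TEXT =
  "You are talking to an AI companion. Its responses are machine-generated " +
  "and may not be suitable for children.";

interface GateDecision {
  allowCompanion: boolean;
  disclosure: string | null;
  reason: "unverified" | "minor" | "adult";
}

async function gateCompanionAccess(
  userId: string,
  verifier: AgeVerifier
): Promise<GateDecision> {
  const age = await verifier.verifiedAge(userId);

  if (age === null) {
    // No verified age yet: hold the companion feature until verification completes.
    return { allowCompanion: false, disclosure: DISCLOSURE_TEXT, reason: "unverified" };
  }

  if (age < 18) {
    // Minors may proceed, but the disclosure must be shown prominently and
    // repeated at interaction points, not just once at signup.
    return { allowCompanion: true, disclosure: DISCLOSURE_TEXT, reason: "minor" };
  }

  return { allowCompanion: true, disclosure: null, reason: "adult" };
}
```

In practice, both the gating decision and the fact that the disclosure was actually rendered would be written to an auditable log, which lines up with the age-gating item in the action checklist below.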

Bill text and updates: California SB 243.

Why this matters for legal and compliance teams

The disclosure duty to minors and the required safety protocols will drive changes to product design, data flows, and vendor dependencies. The liability language shifts weight onto enterprise governance rather than claims of model autonomy.

Counsel should expect closer scrutiny of content moderation, crisis escalation, and age gating. Decentralized and gaming platforms are expressly part of the scope discussion, which complicates both enforcement and product architecture.

Action checklist

  • Scope assessment: Identify AI companion or conversational features accessible to California users, and determine minor exposure.
  • Age gating: Implement or upgrade age verification with auditable logs and appeal paths.
  • Minor-facing disclosures: Draft clear, repeated disclosures that the bot is AI-generated and may be unsuitable for children; localize where necessary.
  • Self-harm protocols: Define detection thresholds, UX interventions, referral resources, and staff escalation procedures; test and document each path (see the escalation sketch after this list).
  • Data governance: Map data used for age checks and safety triggers; define retention, minimization, and security controls.
  • Contracting: Update vendor and model-provider agreements to cover safety features, incident support, uptime for safety tooling, and audit rights.
  • Policies and terms: Update ToS/community guidelines to reflect minor protections, bot disclosures, and safety interventions.
  • UX and QA: Embed warnings at first use and at sensitive prompts; add rate limits and safe-mode defaults for minors (sketched below); run red-team tests.
  • Governance: Stand up a cross-functional review (legal, policy, trust and safety, engineering) with sign-offs before launch or material changes.
  • Multi-state alignment: Utah enacted a disclosure requirement for AI chatbots in 2024; plan for harmonized disclosures across states.
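
The self-harm item above is the most implementation-heavy. Below is a minimal, hedged sketch of one possible escalation path: classify an incoming message, log the decision, surface referral resources, and page trained staff for acute cases. The SafetyClassifier, the thresholds behind the risk levels, and the EscalationHooks helpers are hypothetical stand-ins for whatever classifier and trust-and-safety tooling a platform already runs; SB 243 requires a protocol but does not prescribe this design.

```typescript
// Minimal sketch of a suicide/self-harm escalation path: classify, log,
// intervene in the UX, and escalate acute cases to trained staff.
// SafetyClassifier, the RiskLevel thresholds, and EscalationHooks are
// hypothetical stand-ins for a platform's existing safety tooling.

type RiskLevel = "none" | "elevated" | "acute";

interface SafetyClassifier {
  assess(message: string): Promise<{ level: RiskLevel; score: number }>;
}

interface EscalationHooks {
  showCrisisResources(userId: string): Promise<void>; // e.g. link the 988 Suicide & Crisis Lifeline (US)
  pauseCompanionSession(userId: string): Promise<void>;
  notifyTrustAndSafety(userId: string, score: number): Promise<void>;
  logDecision(userId: string, level: RiskLevel, score: number): Promise<void>;
}

async function handleIncomingMessage(
  userId: string,
  message: string,
  classifier: SafetyClassifier,
  hooks: EscalationHooks
): Promise<void> {
  const { level, score } = await classifier.assess(message);

  // Log every assessment so the protocol can be tested and documented.
  await hooks.logDecision(userId, level, score);

  if (level === "elevated") {
    // Soft intervention: keep the session open but surface referral resources.
    await hooks.showCrisisResources(userId);
  } else if (level === "acute") {
    // Hard intervention: show resources, pause the companion, page staff.
    await hooks.showCrisisResources(userId);
    await hooks.pauseCompanionSession(userId);
    await hooks.notifyTrustAndSafety(userId, score);
  }
}
```

Routing every assessment through logDecision is what produces the test and documentation evidence the checklist calls for.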

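For the UX and QA item, safe-mode defaults and a simple per-session budget can be expressed as a small policy object applied at session start. The field names and numbers below are illustrative assumptions, not values drawn from the statute.

```typescript
// Minimal sketch of minor-specific session defaults: conservative safe mode,
// a coarse message budget, and periodic re-display of the AI disclosure.
// Field names and numbers are illustrative, not values taken from SB 243.

interface SessionPolicy {
  safeMode: boolean;             // stricter content filtering on companion replies
  maxMessagesPerHour: number;    // coarse rate limit to interrupt marathon sessions
  repeatDisclosureEvery: number; // re-show the AI disclosure every N messages (0 = never)
}

function policyForUser(isMinor: boolean): SessionPolicy {
  return isMinor
    ? { safeMode: true, maxMessagesPerHour: 60, repeatDisclosureEvery: 20 }
    : { safeMode: false, maxMessagesPerHour: 600, repeatDisclosureEvery: 0 };
}

function shouldThrottle(messagesThisHour: number, policy: SessionPolicy): boolean {
  return messagesThisHour >= policy.maxMessagesPerHour;
}
```
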
Timeline and planning

With an expected effective date of January 2026, teams have only a few months for engineering, testing, and documentation, so lock designs as early as possible. Build a compliance record now: risk assessments, decision logs, and training evidence will support your posture if questioned.

Federal backdrop

In June 2025, Senator Cynthia Lummis introduced the Responsible Innovation and Safe Expertise (RISE) Act, which would shield AI developers from certain civil liability when their tools are used by professionals in sectors like healthcare, law, and finance. The bill received mixed reactions and was referred to committee.

Monitor any federal movement that could interact with state-level duties. Until preemptive rules are clear, state compliance will drive the baseline for product and policy decisions.

Practical next steps

  • Appoint an executive owner for minor safety and AI disclosures.
  • Budget for age verification tooling, moderation enhancements, and incident response training.
  • Run a tabletop exercise for a self-harm incident involving a minor and capture lessons learned.

If your team needs structured upskilling on AI safety, policy, and product compliance, see curated options at Complete AI Training.

