California Poised to Enact First US AI Companion Chatbot Safety Law
California's SB 243 would set first-in-US safety rules for AI companion chatbots in 2026-filters, AI reminders for minors, and $1,000-per-incident exposure. Reporting starts 2027.

California's AI Companion Chatbot Bill: What Legal Teams Need to Know
California's SB 243 has passed both chambers and is awaiting the governor's signature. If signed, it takes effect January 1, 2026. It would be the first state law in the US that sets safety rules for AI "companion" chatbots.
The bill focuses on user protection, especially minors, and creates liability exposure for providers. It also sets a second phase of transparency obligations starting July 2027.
Scope: Who's in the frame
The bill targets providers of AI companion chatbots used for conversation and emotional support. Companies explicitly referenced include OpenAI, Character.AI, and Replika. If your product simulates companionship or ongoing personal interaction, treat this as in scope.
Core obligations (effective Jan 1, 2026, if signed)
- Safety filters: Prevent conversations about suicide, self-harm, or sexually explicit material.
- AI disclosures: Provide regular reminders that the user is interacting with an AI, with specific attention to minors.
- User protections: Implement controls to reduce foreseeable mental-health harms tied to chatbot interactions.
- Liability: Users harmed by violations can seek up to $1,000 per incident.
Reporting and transparency (starting July 2027)
Providers face annual reporting designed to surface mental-health risks associated with AI companions. Plan for structured metrics, documented testing, and governance artifacts that regulators and plaintiffs' lawyers will expect to see.
What was removed from earlier drafts
- A ban on reward mechanics (e.g., unlockable content or personalized reminders) was dropped.
- A requirement to track how often chatbots initiated suicide-related conversations was removed.
The final text is a compromise intended to be technically feasible while maintaining meaningful protections.
Context and regulatory signals
Momentum grew after a teenager's suicide and reports that Meta's chatbots used "romantic" and "sensual" language with children. The FTC has also demanded information from seven AI companies on testing, monitoring, and youth protections. California's approach aligns with European goals-protect minors, boost transparency, and hold providers accountable-but takes a use-case-specific path versus the EU's risk-based AI Act combined with platform rules under the DSA and GDPR.
Compliance checklist to start now
- Policy and taxonomy: Define "AI companion" scope across your product line; document in policy.
- Safety controls: Implement prompt/response filtering that blocks or deflects suicide, self-harm, and sexual content; include crisis resources for deflections.
- Minor protections: Build age-aware experiences; ensure more frequent and clear AI disclosures for minors.
- Disclosure cadence: Add recurring, user-visible notices that the agent is an AI; log delivery.
- Testing and audit: Red-team for evasion and jailbreaks; maintain test suites and evidence of remediation.
- Governance: Establish an accountable owner, approval workflows, and incident response for safety escalations.
- Records: Keep configuration, model, and moderation logs necessary for the 2027 reporting phase.
- Vendor and model contracts: Flow down safety and disclosure obligations to model/API providers and data vendors.
- UX and legal review: Update TOS, safety policies, and user messaging to reflect obligations and remedies.
- Training: Train trust & safety, support, and engineering teams on the new requirements and escalation paths.
Key legal questions to resolve early
- Definition risk: What product features make a chatbot a "companion" under SB 243?
- Standard of prevention: What technical measures satisfy "prevent" (e.g., blocklists, classifiers, human review)?
- Minors: How will age-gating or age inference be implemented and documented?
- Extraterritorial reach: How does the law apply to out-of-state providers serving California users?
- Overlap: Interplay with COPPA, state privacy laws, and existing platform safety rules.
- Litigation posture: How to quantify incident exposure under the $1,000 per-incident remedy and manage class action risk.
Action plan and timeline
- Now: Monitor the governor's decision. Launch a gap assessment against the 2026 obligations.
- Q4 2025-Q2 2026: Ship content filters, disclosure UX, logging, and governance. Begin internal audits.
- By mid-2027: Stand up reporting pipelines and internal review processes for annual disclosures.
SB 243 signals a new baseline for AI companionship products in the US. Legal teams should treat the 2026 safety controls as day-one requirements and build the 2027 reporting spine in parallel. The cost of delay will be paid in incidents, documentation gaps, and plaintiff leverage.