California Poised to Require Safety Warnings and Protections for AI Chatbots Amid Mental Health Concerns

California lawmakers passed Senate Bill 243 to regulate AI companion chatbots, focusing on safety for minors and mental health risks. Platforms would have to warn users, provide crisis resources, and report how often users express suicidal thoughts.

Published on: Jun 04, 2025

California Lawmakers Move to Regulate AI Chatbots

California lawmakers took a significant step toward regulating AI-powered chatbots by passing Senate Bill 243. The bill focuses on making companion chatbots safer, especially for minors, following parental concerns about the impact these virtual characters may have on children's mental health.

Addressing Mental Health Risks Linked to AI Chatbots

Concerns have grown over teens confiding dark thoughts in AI chatbots and the harm that can follow. At least one AI startup has drawn criticism for allegedly releasing chatbots that damaged young users' mental health. The legislation reflects how California is responding to these safety issues as AI tools become more common.

Sen. Steve Padilla (D-Chula Vista), one of the bill’s sponsors, emphasized California’s role in setting standards: “The country is watching again for California to lead.” However, critics such as the Electronic Frontier Foundation argue the bill is overly broad and risks infringing on free speech, highlighting the challenge of balancing safety and innovation.

Key Provisions of Senate Bill 243

  • Companion chatbot platforms must remind users every three hours that the virtual characters are not human.
  • Platforms need to disclose that these chatbots may not be appropriate for some minors.
  • Operators must implement protocols to respond to suicidal ideation, suicide attempts, or self-harm expressed by users, including providing suicide prevention resources.
  • Platforms are required to report how often chatbots detect suicide-related expressions from users.

The bill defines companion chatbots as AI systems designed to meet users’ social needs, deliberately excluding chatbots used purely for customer service.
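For readers who build these systems, the sketch below shows one way an operator might wire the three-hour reminder, a crisis-resource response, and a detection tally into a chat session. It is a minimal illustration in Python; the class, the keyword screen, and all names are hypothetical assumptions, not anything prescribed by the bill or used by any real platform.

```python
"""Minimal sketch of how a companion-chatbot operator might approach
SB 243-style requirements. All class and function names are hypothetical
illustrations, and the keyword screen is a stand-in for whatever
detection method an operator actually uses."""

import time

REMINDER_INTERVAL_SECONDS = 3 * 60 * 60  # remind every three hours
CRISIS_RESOURCES = (
    "If you are struggling, you can call or text 988 (U.S.) "
    "or text HOME to 741741 to reach the Crisis Text Line."
)
# Simplistic stand-in for a real self-harm detection model.
RISK_KEYWORDS = {"suicide", "kill myself", "self-harm", "end my life"}


class CompanionChatSession:
    def __init__(self):
        self.last_reminder = 0.0
        self.risk_detections = 0  # tallied for aggregate reporting

    def _maybe_remind(self, now: float) -> str | None:
        """Return the 'not a human' reminder if three hours have passed."""
        if now - self.last_reminder >= REMINDER_INTERVAL_SECONDS:
            self.last_reminder = now
            return "Reminder: you are chatting with an AI, not a human."
        return None

    def _screen_for_risk(self, message: str) -> bool:
        """Flag messages that may express suicidal ideation or self-harm."""
        text = message.lower()
        return any(keyword in text for keyword in RISK_KEYWORDS)

    def handle_message(self, message: str, now: float | None = None) -> list[str]:
        """Produce system responses required before the chatbot's own reply."""
        now = time.time() if now is None else now
        responses = []
        reminder = self._maybe_remind(now)
        if reminder:
            responses.append(reminder)
        if self._screen_for_risk(message):
            self.risk_detections += 1
            responses.append(CRISIS_RESOURCES)
        return responses
```

In practice, an operator would replace the keyword list with a proper classifier or escalation protocol and route the aggregate detection counts into whatever reporting the law ultimately requires; the sketch only shows where those hooks would sit.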

Support for Ethical Responsibility in AI

Dr. Akilah Weber Pierson, a co-author of the bill, stressed the need for ethical responsibility alongside innovation. She pointed out that chatbots are engineered to capture attention, including that of children. “When a child begins to prefer interacting with AI over real human relationships, that is very concerning,” she said.

Real-Life Impact and Legal Action

The bill has received support from parents affected by chatbot-related tragedies. Megan Garcia, a mother from Florida, lost her son Sewell Setzer III to suicide last year. She has since sued Google and Character.AI, claiming their platforms harmed her son’s mental health and failed to offer help when he expressed suicidal thoughts to chatbots.

Character.AI, based in Menlo Park, California, allows users to create and interact with digital characters that mimic real and fictional people. The company states it takes teen safety seriously and has introduced features enabling parents to monitor their children's chatbot usage. While Character.AI sought to dismiss the lawsuit, a federal judge allowed the case to proceed in May.

Suicide Prevention and Crisis Resources

If you or someone you know is struggling with suicidal thoughts, professional help is available. In the U.S., call or text 9-8-8 to reach trained mental health counselors through the nationwide crisis hotline. You can also text HOME to 741741 to connect with the Crisis Text Line in the U.S. and Canada.

This legislation marks an important move toward safer AI interactions, especially for vulnerable users. For those interested in AI development and governance, staying informed on these regulatory changes is crucial.

Explore more on AI safety and development at Complete AI Training.