Character.AI restricts teen chats: what healthcare and other professionals should know
Character.AI will block users under 18 from open-ended chats with its bots starting November 24. Teens will still have access to other features, such as video creation and character voices. The company calls these "extraordinary steps" and says they are the right thing to do.
The platform launched in 2022 and lets users build AI companions, chat with them, and publish them for others. While many use it for interactive stories and creative projects, some public bots have crossed lines. Character.AI says it has removed bots based on controversial and criminal figures, and it is facing multiple lawsuits claiming the app contributed to teen suicides. The company maintains it cares deeply about user safety and has rolled out new guardrails.
Why this matters for youth mental health
About 21% of 13-to-17-year-olds report loneliness, according to the World Health Organization. As isolation grows, teens are turning to AI companions for quick relief. Bonding with an AI can trigger dopamine release, but it's not a substitute for real, supportive relationships. As one therapist put it: "We've evolved to be social creatures."
For clinicians and care teams, this shift shows up as mood changes, sleep disruption, and social withdrawal tied to heavy chatbot use. The pull is simple: AI is available 24/7, never judges, and is far easier to reach than real-world connection.
What changed on the platform
Policy: Open-ended chats are blocked for users under 18 starting November 24. Other features remain accessible to teens.
Scale: Character.AI reports 20+ million monthly users. Most are Gen Z, more than half are female, and under 10% self-report as minors.
Safety posture: The company cites thresholds for sexual and violent content and says it stops conversations at the first detection of self-harm and surfaces helplines.
Known risks called out by experts
- Chatbots that suggest or normalize self-harm, sexual role play, or harmful behaviors.
- Systems built to create emotional bonds and nudge frequent use, "designed to remind you that you should be on platform."
- Guardrails that can be bypassed by determined teens.
- Over-attachment to bots crowding out family, peers, and therapy.
Common Sense Media currently rates Character.AI as unacceptable for users under 18 and says it will keep testing guardrails.
Clinical and practical actions
- Ask directly: Add "AI companion or chatbot use" to intake, biopsychosocial assessments, and routine check-ins. Frequency, content, and emotional reliance matter.
- Screen for risk: If a teen reports self-harm prompts from a bot, treat it as a safety issue. Document, safety-plan, and escalate per protocol.
- Set boundaries: For families, agree on device-free hours, app limits, and where devices charge at night. Encourage accountability with shared expectations.
- Strengthen offline ties: Prioritize peer groups, sports, arts, and family routines that create regular, real connection.
- Tech controls: Use parental controls and network-level filters to block or limit AI chat apps for minors. Revisit configurations regularly; a minimal blocking sketch follows this list.
- Normalize the talk: Explain how AI can feel supportive without offering the full, complex experience of human relationships.
- Crisis pathways: Ensure teens and caregivers know how to reach local crisis services and national lifelines. Integrate into safety plans.
- Team training: Brief staff on how these systems work, their limits, and how to discuss them with youth in plain language.
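For teams that manage family devices or a home network directly, the sketch below shows one crude way to implement a network-level block. It is a minimal illustration only, assuming a device or router that honors /etc/hosts-style blocklists; the domain list is illustrative rather than complete, and dedicated parental-control tools (which also cover mobile apps and encrypted DNS) are usually the better choice.

```python
# Minimal sketch: generate hosts-file entries that black-hole AI chat domains.
# Assumes a device or router that honors /etc/hosts-style blocklists. The
# domain list below is illustrative, not exhaustive -- dedicated parental
# control tools cover apps and DNS-over-HTTPS, which this approach does not.

BLOCKED_DOMAINS = [
    "character.ai",        # the platform discussed in this article
    "beta.character.ai",   # hypothetical subdomain; verify what the app actually uses
]

def hosts_entries(domains: list[str]) -> str:
    """Return hosts-file lines that resolve each domain (and its www. variant) to 0.0.0.0."""
    lines = []
    for domain in domains:
        lines.append(f"0.0.0.0 {domain}")
        lines.append(f"0.0.0.0 www.{domain}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Review the output, then append it to /etc/hosts (or your router's
    # blocklist) with administrator rights.
    print(hosts_entries(BLOCKED_DOMAINS))
```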
What Character.AI says it's doing
The company reports "clear thresholds" on explicit and violent content and halts conversations at the first sign of self-harm, surfacing help resources. It also notes natural stopping points in some features to reduce endless use.
That said, independent reviewers argue engagement mechanics still pull users back. Guardrails help, but they are not foolproof.
Bottom line
The ban is progress, not a cure-all. Expect reduced exposure for minors to risky open-ended chats, but don't assume full protection. Keep the focus on early detection of risk, honest conversations about AI companions, and building consistent human connection.
For structured AI literacy resources you can share across teams, see AI courses by job.