Teens Built Creative Communities on AI Chatbots. Then Platforms Shut Them Out.
Researchers at the University of Sydney documented how teenagers used AI companions for storytelling, identity exploration, and emotional expression, going far beyond simple question-and-answer exchanges. The social chatbot platform Character.AI grew to 20 million users hosting 10 million characters before banning teen accounts in November 2025 under legal and safety pressure.
The ban eliminated a range of youth-driven behaviors that researchers say offer lessons for building safer, more expressive systems. The study argues that platforms should balance content moderation with youth-centered design rather than rely on blunt age-based exclusions.
How Teens Actually Used These Platforms
The research captured three recurring interaction patterns:
- Roleplay and narrative co-creation. Teens used characters to collaborate on creative writing and improvisation, building stories with AI partners.
- Identity exploration. Adolescents tested alternate personas and rehearsed conversations to understand themselves and social situations.
- Emotional expression. Teens engaged in exchanges that functioned as informal support or spaces to experiment with feelings.
These behaviors don't fit the transactional chatbot mold. Standard content filters and age gates often break or block them entirely.
The Problem With Binary Bans
Character.AI deployed parental controls and stricter content filters before ultimately excluding teens altogether. The move reduced immediate risk exposure but eliminated the exploratory use cases that reveal design gaps.
For product and safety teams, the finding underscores a tension: broad restrictions often prevent valuable interactions alongside genuinely harmful ones. Fine-grained intent detection, contextual moderation, and graduated safety controls could preserve creative expression without increasing harm.
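The graduated approach described above can be sketched in code. The snippet below is a minimal illustration, not any platform's actual system: the keyword heuristics, intent labels, and action names are all hypothetical stand-ins for a trained intent classifier and a real moderation policy. The point is the shape of the logic: classify the interaction first, then apply the lightest control that addresses its risk, blocking only clearly harmful intent rather than excluding an entire age group.

```python
from enum import Enum

class Intent(Enum):
    ROLEPLAY = "roleplay"
    IDENTITY_EXPLORATION = "identity_exploration"
    EMOTIONAL_EXPRESSION = "emotional_expression"
    HARMFUL = "harmful"

# Hypothetical markers standing in for a trained classifier's harmful class.
HARM_MARKERS = {"self-harm instructions", "violence instructions"}

def classify_intent(message: str) -> Intent:
    """Toy heuristic classifier; a production system would use a model."""
    lowered = message.lower()
    if any(marker in lowered for marker in HARM_MARKERS):
        return Intent.HARMFUL
    if lowered.startswith("*"):  # asterisks as a common roleplay convention
        return Intent.ROLEPLAY
    if "i feel" in lowered or "i'm feeling" in lowered:
        return Intent.EMOTIONAL_EXPRESSION
    return Intent.IDENTITY_EXPLORATION

def moderation_action(intent: Intent, is_minor: bool) -> str:
    """Graduated controls: block only harmful intent; otherwise apply
    lighter-touch safeguards instead of excluding the user outright."""
    if intent is Intent.HARMFUL:
        return "block_and_offer_resources"
    if is_minor and intent is Intent.EMOTIONAL_EXPRESSION:
        return "allow_with_support_links"
    return "allow"
```

In this sketch, a teen's roleplay message (`*draws sword*`) passes through untouched, an emotional-expression message triggers support resources alongside the conversation, and only harmful content is blocked, which is the contrast with a binary ban that treats all three the same.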
The researchers also stress that privacy-preserving data flows for research matter: without the ability to study behavior, platforms can't understand what's actually happening in these spaces.
What Comes Next
Expect product experiments testing graduated safety measures and improved intent classifiers. Regulators and platform trust teams will watch whether companies replace blanket exclusions with evidence-based controls tailored to different interaction types.
For creatives working with or designing AI tools, this research points to a practical question: How do you build systems that support expressive, exploratory use without sacrificing safety? AI for Creatives resources explore how creative professionals can work with these tools responsibly as the industry refines its approach.