Teens use AI companions for roleplay and identity exploration, not just basic Q&A, researchers find

Teens used Character.AI for storytelling and identity exploration, not just Q&A, until the platform banned minors in November 2025 under legal pressure. Researchers say age-based bans erase valuable behaviors that should instead inform safer design.

Categorized in: AI News Creatives
Published on: Apr 16, 2026

Teens Built Creative Communities on AI Chatbots. Then Platforms Shut Them Out.

Researchers at the University of Sydney documented how teenagers used AI companions for storytelling, identity exploration, and emotional expression, going far beyond simple question-and-answer exchanges. The social chatbot platform Character.AI grew to 20 million users hosting 10 million characters before banning teen accounts in November 2025 under legal and safety pressure.

The ban eliminated a range of youth-driven behaviors that researchers say offer lessons for building safer, more expressive systems. The study argues that platforms should balance content moderation with youth-centered design rather than rely on blunt age-based exclusions.

How Teens Actually Used These Platforms

The research captured three recurring interaction patterns:

  • Roleplay and narrative co-creation. Teens used characters to collaborate on creative writing and improvisation, building stories with AI partners.
  • Identity exploration. Adolescents tested alternate personas and rehearsed conversations to understand themselves and social situations.
  • Emotional expression. Teens engaged in exchanges that functioned as informal support or spaces to experiment with feelings.

These behaviors don't fit the transactional chatbot mold. Standard content filters and age gates often break or block them entirely.

The Problem With Binary Bans

Character.AI deployed parental controls and stricter content filters before ultimately excluding teens altogether. The move reduced immediate risk exposure but eliminated the exploratory use cases that reveal design gaps.

For product and safety teams, the finding underscores a tension: broad restrictions often prevent valuable interactions alongside genuinely harmful ones. Fine-grained intent detection, contextual moderation, and graduated safety controls could preserve creative expression without increasing harm.

The researchers also stress that privacy-preserving data flows for research matter: without the ability to study behavior, platforms can't understand what's actually happening in these spaces.

What Comes Next

Expect product experiments testing graduated safety measures and improved intent classifiers. Regulators and platform trust teams will watch whether companies replace blanket exclusions with evidence-based controls tailored to different interaction types.

For creatives working with or designing AI tools, this research points to a practical question: How do you build systems that support expressive, exploratory use without sacrificing safety? AI for Creatives resources explore how creative professionals can work with these tools responsibly as the industry refines its approach.

