Character.AI will block users under 18 starting November 25 - what product teams should take from this
Character.AI announced it will prohibit users under 18 from using its chatbots beginning November 25, 2025. Over the next few weeks, the company plans to identify underage users and gradually limit their time on the app before full enforcement kicks in.
The move follows regulatory scrutiny over teen exposure to AI chat content and growing concern about how open-ended AI chat affects younger users. The company also cited news coverage of harms to children as part of its decision, and internally it frames the change as more conservative than its peers' approaches - and necessary.
How Character.AI plans to enforce age limits
- In-house age assurance: The company will use its own methods to classify user age and "ensure users receive the right experience for their age."
- Third-party verification: It will pair internal systems with tools from Persona for an additional layer of assurance.
- Gradual throttling: Underage users will first see time limits on the app, then lose access entirely once the rule takes effect.
- Independent nonprofit: Character.AI says it's establishing a nonprofit focused on safety measures for AI entertainment.
Context product leaders should note
The company was sued last year by parents of a 14-year-old who died by suicide after heavy use of a chatbot. Regulators have pressed AI firms about teen safety and the downstream effects of conversational agents, even when content filters are in place. This isn't just about content; it's about product mechanics, engagement patterns, and duty of care.
What this means for product and growth
- Short-term DAU drop is likely if a meaningful share of users is under 18. Expect shifts in session length and engagement cohorts (a back-of-envelope sketch follows this list).
- Brand trust and regulator goodwill can improve with visible guardrails and enforcement. This can de-risk partnerships and enterprise distribution.
- Safety-by-design becomes table stakes for consumer AI. Teams will need clear policies, auditable systems, and ongoing calibration.
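A quick way to pressure-test the first point: model the gate's impact before shipping it. The sketch below is a back-of-envelope estimate in Python; every input (DAU, under-18 share, session multiplier) is a hypothetical placeholder, not Character.AI data.

```
# Back-of-envelope impact estimate. All inputs are hypothetical placeholders;
# replace them with your own cohort data.
dau = 1_000_000            # current daily active users
under_18_share = 0.12      # estimated share of DAU under 18 (assumed)
minor_session_mult = 1.4   # minors' avg sessions relative to adults (assumed)

blocked_dau = dau * under_18_share
remaining_dau = dau - blocked_dau

# Session-weighted engagement loss: minors may over-index on usage,
# so session volume can drop more than raw DAU.
total_sessions = dau * ((1 - under_18_share) + under_18_share * minor_session_mult)
lost_sessions = dau * under_18_share * minor_session_mult

print(f"DAU drop: {blocked_dau:,.0f} ({under_18_share:.0%})")
print(f"Remaining DAU: {remaining_dau:,.0f}")
print(f"Session volume drop: {lost_sessions / total_sessions:.1%}")
```

With these illustrative inputs, a 12% DAU loss translates to roughly a 16% session-volume loss - the kind of gap worth surfacing to leadership before enforcement day.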
Practical playbook for age assurance in AI products
- Define policy: Who is allowed, what features are age-gated, and what happens on suspicion vs. confirmation of underage use.
- Choose the stack: Combine probabilistic signals (usage patterns, device hints) with verification vendors where needed, and minimize data retention (see the layered-check sketch after this list).
- Respect privacy: Follow data minimization and purpose limitation. If you touch under-18 data, align with COPPA principles and regional age-appropriate design standards.
- Design the UX: Make the age flow clear, fast, and recoverable. Provide appeals, parental options where applicable, and transparent messaging.
- Instrument everything: Track false positives/negatives, appeal rates, session impacts, and customer support load (a metrics sketch follows this list).
- Stress-test content controls: Safety filters aren't enough. Pair them with rate limits, escalation paths, and human review for edge cases.
- Plan for abuse: Expect age spoofing. Use layered checks and risk-scored friction for high-suspicion sessions.
- Audit and iterate: Document decisions, run periodic reviews, and publish summaries to build trust with users and regulators.
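To make the "choose the stack" and "plan for abuse" items concrete, here is a minimal sketch of layered, risk-scored age checks. The signal names, weights, and thresholds are illustrative assumptions, not any vendor's API; in production they would be calibrated against labeled data.

```
from dataclasses import dataclass

# Illustrative layered age-assurance sketch: cheap probabilistic signals
# first, escalating to a verification vendor only for ambiguous sessions.
# All signals, weights, and thresholds are assumptions for illustration.

@dataclass
class AgeSignals:
    self_declared_adult: bool   # user's declared age at signup
    device_hint_minor: bool     # e.g., OS-level family/child account flag
    usage_pattern_score: float  # 0..1 model score from session patterns
    payment_verified: bool      # adult payment method on file

def minor_risk_score(s: AgeSignals) -> float:
    """Combine signals into a 0..1 risk that the user is under 18."""
    score = 0.0
    if not s.self_declared_adult:
        score += 0.5
    if s.device_hint_minor:
        score += 0.3
    score += 0.4 * s.usage_pattern_score
    if s.payment_verified:
        score -= 0.3
    return max(0.0, min(1.0, score))

def next_action(s: AgeSignals) -> str:
    """Risk-scored friction: low risk passes silently, ambiguity escalates."""
    risk = minor_risk_score(s)
    if risk < 0.2:
        return "allow"                # low risk: no friction
    if risk < 0.6:
        return "limit_features"       # age-gate sensitive features, add time limits
    return "require_verification"     # hand off to a third-party verifier

if __name__ == "__main__":
    session = AgeSignals(self_declared_adult=True, device_hint_minor=False,
                         usage_pattern_score=0.7, payment_verified=False)
    print(next_action(session))  # -> "limit_features" (risk = 0.28)
```

The design choice worth copying is the middle tier: most sessions should resolve on cheap signals, and expensive verification should be reserved for genuinely ambiguous or high-suspicion cases.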
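And for the instrumentation item, a minimal sketch of the core gate metrics, assuming appeal outcomes eventually give you ground-truth labels. All counts below are placeholders.

```
# Minimal instrumentation sketch for the "instrument everything" step.
# Counts come from appeal outcomes: a gate decision later confirmed wrong
# is a false positive (adult blocked) or false negative (minor allowed).
# The numbers are placeholders, not real data.

gated_adults = 420        # adults incorrectly gated (false positives)
gated_minors = 9_580      # minors correctly gated (true positives)
missed_minors = 510       # minors who slipped through (false negatives)
passed_adults = 88_000    # adults correctly allowed (true negatives)

false_positive_rate = gated_adults / (gated_adults + passed_adults)
false_negative_rate = missed_minors / (missed_minors + gated_minors)
precision = gated_minors / (gated_minors + gated_adults)

print(f"FPR: {false_positive_rate:.2%}")   # friction cost on adults
print(f"FNR: {false_negative_rate:.2%}")   # the safety gap regulators care about
print(f"Gate precision: {precision:.2%}")
```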
Questions to ask your team this week
- What percentage of our usage is likely under 18, and how would an age gate impact core metrics?
- Do we have a written safety policy for minors, with an escalation path and clear ownership?
- Which features require stricter gating (e.g., romantic roleplay, mental health topics, NSFW-adjacent prompts)?
- What's our play if regulators request evidence of enforcement and outcomes?
What to watch next
- Peer moves: Expect other consumer AI apps to tighten age access or add stronger verification.
- Regulatory guidance: More explicit expectations for AI chat, minors, and "open-ended" interactions are likely.
- Vendor ecosystem: Identity and risk vendors will push purpose-built checks for AI experiences.
If you're building or updating AI safety skills across a product org
For structured upskilling across product, design, data, and engineering, explore curated tracks by role at Complete AI Training.
Bottom line: Character.AI is trading some growth for clearer guardrails. For product teams, this is a nudge to set age policies, ship verification that respects privacy, and measure outcomes as you would any other core feature.