China proposes first-of-its-kind rules to rein in AI companions amid addiction and self-harm fears

China drafts rules for AI companions to curb addiction, protect minors, and route self-harm to human help. Expect clear labels, time limits, and scale-triggered audits.

Published on: Dec 30, 2025

China Moves to Curb Risks From AI Companions: Addiction, Self-Harm, and Minors in Focus

China's cyber regulator has issued draft rules to govern AI chatbots that emulate human personalities. The goal is straightforward: reduce addiction, prevent self-harm, and set clear safety duties for providers as AI companionship tools spread across daily life.

If adopted, these would be the first comprehensive rules of their kind, with clear obligations on disclosure, oversight, and emergency escalation. Public comments are open until January 25.

Key Requirements at a Glance

  • Disclosure and time-use checks: Pop-up reminders at login and after two hours of continuous use, clearly stating that the user is talking to an AI, not a human (a minimal sketch of these checks follows this list).
  • Provider responsibility: Safety obligations across the product lifecycle, including algorithm reviews, data security, and personal information protection.
  • Scale-based scrutiny: Security assessments for services with over 1 million registered users or more than 100,000 monthly active users.
  • Minors by default if in doubt: Providers must determine whether a user is a minor even without explicit age data. If unsure, treat the user as a minor: require guardian consent for emotional companionship and enforce time limits.
  • Mental health safeguards: Systems must identify user states, assess emotion and dependence, and intervene when users show extreme emotions or addictive behavior.
  • Immediate human takeover for self-harm: Any explicit mention of suicide triggers a handoff to a human and outreach to a guardian or emergency contact.
  • Content and conduct red lines: No content that threatens national security, spreads rumors disrupting economic or social order, or involves pornography, gambling, violence, or incitement to crime. No glamorizing suicide or self-harm. No verbal abuse or emotional manipulation that harms users' dignity or mental/physical health.

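For providers sizing up these obligations, here is a minimal Python sketch of what the disclosure reminders and the scale-based assessment trigger could look like. It is an illustration only, not anything from the draft text: the class and function names are hypothetical, and the thresholds are hard-coded from the figures reported above.

```python
from datetime import datetime, timedelta

# Illustrative constants based on the figures reported in the draft rules.
CONTINUOUS_USE_REMINDER = timedelta(hours=2)
REGISTERED_USER_THRESHOLD = 1_000_000
MONTHLY_ACTIVE_THRESHOLD = 100_000

AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a human."


class SessionTracker:
    """Tracks one user's continuous session to decide when disclosure reminders are due."""

    def __init__(self, session_start: datetime):
        self.session_start = session_start
        self.reminded_at_login = False
        self.reminded_after_two_hours = False

    def pending_reminders(self, now: datetime) -> list[str]:
        """Return disclosure reminders that are due but have not been shown yet."""
        reminders = []
        if not self.reminded_at_login:
            reminders.append(AI_DISCLOSURE)  # shown at login
            self.reminded_at_login = True
        if (not self.reminded_after_two_hours
                and now - self.session_start >= CONTINUOUS_USE_REMINDER):
            reminders.append(AI_DISCLOSURE)  # shown after two hours of continuous use
            self.reminded_after_two_hours = True
        return reminders


def needs_security_assessment(registered_users: int, monthly_active_users: int) -> bool:
    """Scale-based trigger: crossing either threshold puts the service in scope."""
    return (registered_users > REGISTERED_USER_THRESHOLD
            or monthly_active_users > MONTHLY_ACTIVE_THRESHOLD)


if __name__ == "__main__":
    start = datetime(2025, 12, 30, 9, 0)
    tracker = SessionTracker(session_start=start)
    print(tracker.pending_reminders(now=start))                       # login reminder
    print(tracker.pending_reminders(now=start + timedelta(hours=2)))  # two-hour reminder
    print(needs_security_assessment(1_200_000, 80_000))               # True: registered users
```
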
Why This Matters

AI companions are seeing strong uptake across China, the US, Japan, South Korea, Singapore, and India. Many people use them for friendship, advice, or grief support, but the tools have also been linked to suicides and mental health problems, including reports of "AI psychosis" in the US.

These draft rules push providers to build guardrails similar to those used in healthcare and on social platforms: clear identity disclosure, crisis escalation, age protections, and audits at scale. It's a notable attempt to align product incentives with user safety.

Implications by Audience

  • General users: Expect clearer labels, time-use reminders, and stronger protections, especially for minors. If you see crisis prompts in a chat, a human should step in quickly.
  • Healthcare professionals: Prepare for more referrals from AI services flagging risk. Consider updating intake scripts for AI-related dependence, and have clear pathways for crisis response and follow-up. For evidence-based steps, see the WHO guidance on suicide prevention: WHO: Suicide Prevention.
  • Government and regulators: The proposal sets a template: identity disclosure, time limits for minors, human-in-the-loop for high-risk cues, and scaled audits. Monitoring will hinge on incident reporting, age-verification efficacy, and consistency of human escalation.

What Providers Should Do Now

  • Build clear AI identity notices at login and after two hours of continuous use.
  • Stand up a "safety operations" function: algorithm review, dataset controls, red-team testing for emotional manipulation and self-harm prompts.
  • Implement age checks; default to minor status when uncertain; log guardian consent for emotional companionship features.
  • Design 24/7 human-in-the-loop coverage for crisis cues, with protocols to contact guardians or emergency services.
  • Track dependency signals: session length, frequency, intensity of emotional language, and user self-reports, then intervene (see the sketch after this list).
  • Prepare for security assessments as you approach 100,000 monthly active users or 1 million registered users; keep documentation audit-ready.
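
As a rough illustration of the dependency-tracking and crisis-escalation items above, the sketch below scores a few usage signals and hands off to a human when a self-harm cue appears. The signal names, weights, and action strings are all hypothetical; a real system would need clinically informed thresholds and proper language understanding rather than keyword matching.

```python
from dataclasses import dataclass

# Illustrative crisis cues; production systems would use a trained classifier,
# not a keyword list.
CRISIS_CUES = ("suicide", "kill myself", "end my life")


@dataclass
class UsageSignals:
    """Per-user dependency signals a provider might track."""
    daily_minutes: float           # average session length per day
    sessions_per_day: float        # interaction frequency
    emotional_intensity: float     # 0-1 score from sentiment/affect analysis
    self_reported_dependence: bool


def dependency_score(s: UsageSignals) -> float:
    """Combine signals into a 0-1 score. Weights are invented for illustration."""
    score = 0.0
    score += min(s.daily_minutes / 240.0, 1.0) * 0.4    # heavy daily use
    score += min(s.sessions_per_day / 10.0, 1.0) * 0.2  # very frequent sessions
    score += s.emotional_intensity * 0.3
    score += 0.1 if s.self_reported_dependence else 0.0
    return score


def handle_message(message: str, signals: UsageSignals) -> str:
    """Decide the next action for a message, checked in priority order."""
    lowered = message.lower()
    if any(cue in lowered for cue in CRISIS_CUES):
        # Draft rules: explicit self-harm mentions trigger a human takeover
        # and contact with a guardian or emergency contact.
        return "escalate_to_human_and_notify_guardian"
    if dependency_score(signals) >= 0.7:
        return "show_wellbeing_intervention"  # e.g. break prompt, usage summary
    return "continue_ai_conversation"


if __name__ == "__main__":
    signals = UsageSignals(daily_minutes=300, sessions_per_day=12,
                           emotional_intensity=0.8, self_reported_dependence=True)
    print(handle_message("I can't stop thinking about suicide", signals))
    print(handle_message("Tell me about your day", signals))
```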

Industry Context

Major Chinese platforms such as Baidu, Tencent, and ByteDance, along with newer players, are investing in AI companionship. People use these tools for friendship, coaching, therapy-like chats, and even recreations of deceased loved ones. That demand brings clear risks: parasocial attachment, dependency, and exposure to harmful content if guardrails fail.

If these rules take effect as written, they would likely set a new baseline for consumer-facing emotional AI across markets. Expect other countries to study the outcomes closely.

What to Watch Next

  • Final rule text after the public comment period and any phased compliance dates.
  • How "emotional dependence" will be measured and the thresholds that trigger interventions.
  • Standards for verifying minors without intrusive data collection.
  • Transparency requirements around incident rates, escalations, and audit findings.

Upskilling for Responsible AI

If your team builds, procures, or audits AI systems, get ahead of safety expectations and compliance. Explore role-specific training: Complete AI Training: Courses by Job.

