China's Humanlike AI Rules Could Set the Global Standard

China's draft rules for humanlike AI mandate AI disclosures at sign-in and every two hours, values and security guardrails, and pre-launch reviews. They could set a high bar and sway global norms.

Published on: Dec 30, 2025

China's Plans for Humanlike AI Could Set the Tone for Global AI Rules

China has released a draft plan to regulate humanlike AI, covering chatbots and autonomous agents, with a focus on user safety and values enforcement. The proposal requires services to tell users they are interacting with AI at login and again every two hours.

These systems would need to align with "core socialist values," maintain national security guardrails, and go through security reviews. Companies would also have to notify local authorities before launching any new humanlike AI tools.

What's in the proposal

  • Clear disclosure: Users must be informed at sign-in and every two hours that they are interacting with AI.
  • Values and security: Outputs must reflect "core socialist values" and adhere to national security requirements.
  • Pre-launch checks: Providers undergo security reviews and alert local government agencies about new deployments.
  • Emotional agents: Chatbots designed to build emotional connection are barred from content that may encourage suicide, self-harm, gambling, or obscene/violent material.
  • Mental health safeguards: Outputs considered damaging to mental health are prohibited.

Why it matters for public-sector leaders

Research shows that conversational AI can be highly persuasive, which raises risks for vulnerable users. These rules push providers to build with disclosure, mental health protection, and security in mind, requirements that directly affect government service delivery, procurement, and oversight.

If adopted, the framework could influence global norms as vendors standardize to meet stricter markets. Expect stronger defaults around in-product disclosure, crisis escalation, and auditability.

Context: a split approach with the U.S.

China's draft moves forward as U.S. policy remains uneven. Earlier this year, President Donald Trump rescinded the previous administration's executive order on AI safety, and he has since signaled legal action against state-level AI rules his administration views as obstructing progress.

The result: firms may confront different compliance baselines across major markets. Some will build to the highest bar to reduce fragmentation.

Practical implications for agencies and vendors

  • Interface standards: Add visible AI disclosure at entry points and refresh it on a timer. Log each notice for audit trails.
  • Crisis response: Detect self-harm cues and route to human support with approved scripts and escalation paths.
  • Content controls: Enforce filters for gambling, obscene, and violent outputs. Continuously test with red-teaming.
  • Security reviews: Document model lineage, data sources, fine-tuning steps, and safety evaluations before launch.
  • Local reporting: Prepare deployment notifications and contact points for municipal or provincial authorities where required.
  • Records management: Keep versioned policies, change logs, and incident reports that map to regulatory clauses.
  • Human oversight: Assign accountable owners for alignment decisions and sign-offs on risk acceptances.
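The disclosure and logging items above can be sketched in code. The following is an illustrative sketch only, not a compliance implementation: the class, field names, and notice text are hypothetical, and a real deployment would persist the audit log durably and localize the wording.

```python
import time
import json

DISCLOSURE_INTERVAL_S = 2 * 60 * 60  # re-disclose every two hours, per the draft rules


class DisclosureSession:
    """Hypothetical session wrapper: emits an AI disclosure at sign-in and on a
    two-hour timer, and logs each notice for an audit trail."""

    def __init__(self, user_id, clock=time.time):
        self.user_id = user_id
        self.clock = clock          # injectable clock so the timer is testable
        self.audit_log = []         # in production: durable, append-only storage
        self.last_disclosure = None
        self._disclose("sign-in")   # disclosure required at login

    def _disclose(self, reason):
        now = self.clock()
        self.last_disclosure = now
        entry = {"user": self.user_id, "ts": now, "reason": reason,
                 "notice": "You are interacting with an AI system."}
        self.audit_log.append(json.dumps(entry))  # log every notice shown

    def on_message(self):
        """Call on each user turn; re-discloses once two hours have elapsed."""
        if self.clock() - self.last_disclosure >= DISCLOSURE_INTERVAL_S:
            self._disclose("2-hour-timer")
```

Injecting the clock lets an audit test fast-forward time and confirm that a second logged notice appears exactly at the two-hour mark.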

Action checklist for the next 90 days

  • Audit all conversational interfaces; add timed AI disclosures and event logging.
  • Implement self-harm and crisis detection with a human-in-the-loop handoff.
  • Stand up a security review packet: model card, eval results, red-team reports, and data governance notes.
  • Map prohibited-content rules to your filters and test weekly with fresh prompts.
  • Draft local notification templates and identify responsible contacts per jurisdiction.
  • Align procurement language with disclosure, safety, and audit requirements.
  • Train frontline staff on escalation protocol and documentation standards.
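The crisis-detection and content-filter items in the checklist can be illustrated with a minimal triage routine. This is a keyword-pattern sketch under stated assumptions: the cue lists are hypothetical placeholders, and real systems rely on trained classifiers, multilingual coverage, and human review rather than regexes.

```python
import re

# Hypothetical cue lists for illustration only.
CRISIS_PATTERNS = [r"\bself[- ]harm\b", r"\bsuicide\b", r"\bend my life\b"]
BLOCKED_PATTERNS = [r"\bgambling\b"]  # stand-in for the prohibited-content filters


def triage(message: str) -> str:
    """Return a routing decision: 'escalate' to a human responder on crisis
    cues, 'block' for prohibited topics, otherwise 'allow'."""
    text = message.lower()
    if any(re.search(p, text) for p in CRISIS_PATTERNS):
        return "escalate"   # human-in-the-loop handoff with approved scripts
    if any(re.search(p, text) for p in BLOCKED_PATTERNS):
        return "block"
    return "allow"
```

Checking crisis cues before prohibited-content filters reflects the checklist's ordering: a message that trips both should reach a human responder, not a silent block.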

Key dates

The draft is open for public comment until January 25, 2026. Expect revisions, but the direction is clear: stronger safety, explicit disclosure, and closer government oversight.

Further reading

For complementary frameworks: NIST AI Risk Management Framework and OECD AI Principles.

Upskill your team

If you're building policy capacity or vendor-oversight skills, explore role-focused learning paths: AI Learning Path for Regulatory Affairs Specialists, the AI Learning Path for CIOs, and the AI Learning Path for Business Unit Managers.

