China's AI 2025: Breakthrough Models, Labeling Rules, Crackdowns, and What Compliance Now Requires

China's AI rules are tightening: dual content labels, 4-hour incident reporting, routine checks, and clearer filings. Courts are pressing platforms, and legal teams should update disclosures, PIPIAs, and minors' safeguards.

Published on: Dec 17, 2025

China's AI Governance in 2025-2026: What Legal Teams Need to Know

China's AI sector moved fast over the past year. DeepSeek R1 pushed reasoning performance and cost efficiency into the spotlight, and major players followed with upgrades in scale, multimodality, and training efficiency. Adoption is broad: search, office, education, and beyond.

Regulation kept pace. The amended Cybersecurity Law adds an explicit AI compliance provision and takes effect on January 1, 2026. Building on the Algorithm Recommendation Measures, Deep Synthesis Measures, and Generative AI Administrative Measures, regulators are now shifting from principles to mechanisms, and from spot checks to routine enforcement.

Key Developments in 2025

1) Institutional framework: stronger rules and clearer tooling

Content labelling. The CAC's Measures for Labeling AI-Generated Synthesized Content require both explicit labels (visible notices) and implicit labels (metadata). National standards, including GB 45438-2025, set detailed methods for text, audio, image, video, and virtual scenes. Platforms must also verify labels and add explicit marks where content is declared or suspected to be AI-generated. Since September 1, 2025, major platforms have rolled out compliance programs and launched cross-platform verification alliances.

Protection of minors. Authorities and industry groups released guidance for K-12 AI use, ethical principles, and governance norms. Themes include separate consent for biometric editing, limits on profiling, safer defaults, and content controls.

Security incident response. National cybersecurity incident rules now set tighter reporting clocks. Most network operators must report major incidents to their provincial CAC within four hours. For generative AI, TC260's emergency response guide classifies incidents by severity and requires immediate reporting when thresholds are met (e.g., repeated severe incidents, large user impact, or material social harm).

Science and technology ethics. Draft Measures on AI ethics management introduce structured reviews across bias, robustness, logging, and more. An annex lists AI activities requiring expert re-review (e.g., human-machine integration, public opinion guidance, highly autonomous decision systems). The list is designed to update over time.

2) Enforcement: from campaigns to routine supervision

National campaigns. The "Qinglang" series expanded to algorithms, deep synthesis, data security, and AI misuse. One track targeted ranking manipulation, discriminatory pricing, and information cocoons. Another focused on AI misuse at the source (unfiled models, risky features, illegal tutorials) and in application (rumors, pornographic content, impersonation, public opinion manipulation, and harms to minors).

Regional actions. Local CACs removed unregistered tools and tutorials at scale, took down non-compliant AI agents, and summoned operators for content and filing violations. Typical penalties included content takedowns, service suspension, rectification orders, and account closures.

3) Judicial trends: IP, personality rights, and platform duties

  • Training or templates without permission. In Song v. Nanjing Technology Co., Ltd., a face-swap service used the plaintiff's short videos as templates without authorization. The court found the videos protected as audiovisual works and held the platform liable for failing to review the content.
  • AI-generated voice + portrait. In Li v. Cultural Media Co., Ltd., an ad used the plaintiff's public portrait with an AI voice closely resembling his. The court recognized voice as a protectable personality right and found joint liability based on inadequate review of the promotional content.
  • Contributory infringement vs. unfair competition. In a case over Ultraman-like images generated on a training platform, the court found contributory infringement (insufficient care while profiting) but no unfair competition, noting the tool's technological neutrality and the absence of market disruption intent.

Courts are pressing platforms to exercise reasonable review and warnings while avoiding liability theories that would choke off legitimate innovation.

Key Compliance Obligations - Q&A

1) Content labelling

Where is labelling required?

  • Implicit labels (mandatory): Embed metadata in files containing AI-generated or synthesized content. Digital watermarks can be added as needed.
  • Explicit labels (visible): Required for services likely to cause confusion or misidentification, including: chat/text generation or editing; voice synthesis/cloning; face image/video generation, swapping, or editing; pose manipulation; immersive/hyper-real scenes; text-to-image; music/audio generation; text-to-video/image-to-video; and other services that generate or materially alter content.

How to label (explicit): Refer to GB 45438-2025 for placement and format.

  • Text: Add notices at the beginning, end, or inline; interface labels are acceptable.
  • Audio: Spoken notices or rhythmic cues; display labels in the interface.
  • Image: Visible text prompts; text height ≥ 5% of the image's shortest side (see the sketch after this list).
  • Video: Labels on the opening screen and during playback; add end/mid labels where appropriate.
  • Virtual scenes: Labels at session start and, where needed, during the session.
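
For teams automating the image rule, here is a minimal sketch in Python using Pillow. The label wording, font file, placement, and the function name are illustrative assumptions; GB 45438-2025 remains the authoritative source for required text, position, and duration per medium.

```python
# Minimal sketch: overlay a visible AI-generation notice on an image so the
# label text height is at least 5% of the image's shortest side, per the
# rule quoted above. Font path and label wording are placeholders.
from PIL import Image, ImageDraw, ImageFont

def add_explicit_label(src_path: str, dst_path: str,
                       text: str = "AI生成内容 / AI-generated content",
                       font_path: str = "NotoSansCJK-Regular.ttc") -> None:
    img = Image.open(src_path).convert("RGB")
    shortest_side = min(img.size)
    # Font size approximates rendered text height; keep it at >= 5% of the shortest side.
    font_size = max(12, int(shortest_side * 0.05))
    font = ImageFont.truetype(font_path, font_size)

    draw = ImageDraw.Draw(img)
    # Place the notice near the bottom-left corner with a small margin.
    margin = int(shortest_side * 0.02)
    x, y = margin, img.height - font_size - margin
    # Dark backing box so the notice stays legible on any background.
    bbox = draw.textbbox((x, y), text, font=font)
    draw.rectangle(bbox, fill=(0, 0, 0))
    draw.text((x, y), text, font=font, fill=(255, 255, 255))
    img.save(dst_path)
```
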

How to label (implicit): Embed metadata fields such as content type, provider name/code, and content ID. Follow TC260 practice guides for text, image, audio, video, coding rules, and metadata protection.
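
As a rough illustration of the implicit-label idea, the sketch below writes a small JSON payload with the fields named above into a PNG text chunk using Pillow. The chunk key "AIGC-Label", the field names, and the provider code are placeholders; the real field names, encoding rules, and metadata-protection requirements come from GB 45438-2025 and the TC260 practice guides.

```python
# Minimal sketch: embed implicit-label metadata (content flag, provider code,
# content ID) into a PNG's text chunks. Field names and JSON layout are
# illustrative placeholders, not the standard's normative encoding.
import json
import uuid
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def add_implicit_label(src_path: str, dst_path: str,
                       provider_code: str = "example-provider-001") -> str:
    content_id = uuid.uuid4().hex
    payload = {
        "AIGC": "true",                 # flags the content as AI-generated/synthesized
        "ProviderCode": provider_code,  # service provider name/code
        "ContentID": content_id,        # per-item identifier for traceability
    }
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("AIGC-Label", json.dumps(payload, ensure_ascii=False))
    img.save(dst_path, pnginfo=meta)    # dst_path should be a .png file
    return content_id
```
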

What about distribution platforms? They must verify implicit labels for all distributed content and, where content is declared or suspected to be AI-generated, add prominent explicit labels.
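
A platform-side ingest check could look like the hedged sketch below, which pairs with the previous example: it looks for the hypothetical "AIGC-Label" chunk and returns whether an explicit mark should be added. A real pipeline would also cover other media types, watermark checks, and the "suspected" case (for example, via classifiers).

```python
# Minimal sketch of a distribution-platform check: look for the implicit label
# written by the previous sketch, then decide whether an explicit mark is needed.
import json
from PIL import Image

def needs_explicit_label(path: str, declared_ai: bool) -> bool:
    """Return True if the platform should add a prominent explicit label."""
    img = Image.open(path)
    raw = getattr(img, "text", {}).get("AIGC-Label")  # PNG text chunk, if present
    has_implicit = False
    if raw:
        try:
            has_implicit = json.loads(raw).get("AIGC") == "true"
        except json.JSONDecodeError:
            pass
    # Label if the uploader declared AI generation or the implicit label says so.
    return declared_ai or has_implicit
```
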

Can users request unlabelled outputs? Yes, for cases where visible marks would defeat the use case (e.g., commercial design). Providers must: (1) specify the user's labelling responsibilities in the user agreement; and (2) retain application information and logs for at least six months.

Why does this matter in court? Labelling, warnings, and complaint channels are part of "reasonable care." Failure to implement them has already been weighed against platforms in infringement disputes.

2) Algorithmic rule disclosure

What must be disclosed? Under the Algorithm Recommendation Measures, providers must publicly disclose the basic principles, purposes, and main operating mechanisms of recommendation algorithms in simple, clear language.

How to disclose:

  • Dedicated pages: Detailed explainers with visuals and models.
  • Privacy policy sections: Plain-language summaries of logic, use cases, and objectives.

Penalties for non-disclosure: Warnings, public notices, rectification orders, possible suspension of information updates, and fines (RMB 10,000-100,000).

3) Algorithm and AI filing/registration

Internal-only use: If the AI is used solely by employees and outputs stay internal, filing is generally not required.

Calling a third-party LLM via API for public services: If the underlying LLM is already filed, your product typically completes a simpler registration with the provincial CAC (instead of a full filing). Algorithm filing may still apply.

What to prepare (typical):

  • Algorithm filing: Service info, algorithm type, self-assessment, intended disclosures, internal governance policies, responsibility implementation report, and security self-assessment.
  • LLM filing (for providers): Application form, security self-assessment, service agreement, annotation rules, keyword list, evaluation/testing questions, plus API or test accounts for review.

Non-compliance: Warnings, public notices, rectification orders, suspension of information updates, and fines. Expect checks during "Qinglang" actions and daily supervision.

4) Personal information protection

Baseline. AI providers must comply with the PIPL, Cybersecurity Law, and Data Security Law. Deep synthesis providers offering biometric editing must inform targets and obtain separate consent.

Enforcement examples. Penalties have been issued for voice cloning without consent and for failing to conduct Personal Information Protection Impact Assessments (PIPIAs) when building AI datasets containing biometric data.

When is a PIPIA required?

  • Processing sensitive personal information (e.g., biometrics).
  • Automated decision-making or personalized outputs.
  • Entrusted processing, sharing with another controller, or public disclosure.
  • Cross-border transfers.
  • Other processing with material impact on individual rights.

What to assess: Lawful purpose/necessity, impact on rights and risks, and whether safeguards are effective and proportionate. Retain PIPIA and related records for at least three years. Reassess when major product or scale changes occur.

Civil exposure. In a face-swap template case, the court declined the portrait-rights claim but found infringement of personal information rights because the operator collected and analyzed personal data without consent.

Action list for legal teams

  • Map AI products: classify models, features, media types, user groups, and distribution channels.
  • Implement dual labelling: explicit + metadata. Build verification on upload and distribution.
  • Publish algorithmic rule disclosures and keep them understandable.
  • Confirm filing vs. registration needs; prepare self-assessments and governance policies.
  • Stand up PIPIA workflows for training, service delivery, sharing, and cross-border scenarios.
  • Harden incident response: 4-hour reporting readiness, severity classification, playbooks, and duty rosters.
  • Set minors' safeguards: default restrictions, biometric consent flows, and stronger content review.
  • Document "reasonable care": risk warnings, complaint channels, takedown SLAs, and audit trails.


Optional resources for team upskilling

If your legal or compliance team is formalizing AI review processes, consider structured training to accelerate baseline knowledge across roles.

