China resets its path to AI governance: less statute now, more standards and pilots
China has pulled a comprehensive AI law from the 2025 legislative plan. Instead, regulators are doubling down on pilots, standards and targeted rules to keep pace with a fast-moving technology.
The trade-off is clear. Flexibility rises, but so do compliance costs and uncertainty for companies working across fragmented frameworks.
Why the pause - and what it signals
Officials and state media continue to signal that a high-level AI law is coming. Legal Daily has argued that dedicated legislation is still needed to address AI-specific risks like algorithmic bias and discrimination.
For now, policymakers appear to be buying time to learn from pilots and international practice before locking in statutory requirements.
What governs AI in China today
AI use is currently steered by existing statutes, industry standards and sector rules. This patchwork creates friction where new measures collide with established laws.
Example: Shanghai's AI industry regulation aims to expand access to public data for model training. It's not clear how that squares with the Personal Information Protection Law's consent rules or public-interest bases, especially when training sets blend data collected under different conditions.
Another pain point is explainability. Some measures require firms to explain how systems work, while trade-secret and security rules limit what can be disclosed. That tension drives up costs, especially for smaller firms without large compliance teams.
The coordination gap
People's Daily has stressed the need to balance development and security. The missing piece is a coordinating statute that sets uniform baselines and reconciles conflicts.
Such a law could define tiers of model risk, set minimum safety testing and bias evaluation thresholds, and standardize incident reporting. It would also lower compliance costs by aligning sector measures with national requirements.
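To make the idea concrete, a tiered baseline of the kind described could be expressed as simple structured data that sector rules plug into. The tier names, test lists, bias thresholds and reporting deadlines below are hypothetical illustrations, not values from any existing or proposed Chinese rule:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RiskTier:
    """One tier in a hypothetical model-risk classification."""
    name: str
    required_tests: tuple       # minimum pre-deployment evaluations
    max_bias_disparity: float   # illustrative bias-evaluation threshold
    incident_report_hours: int  # deadline for reporting incidents

# Illustrative tiers only; statutory tiers would be set by regulators.
TIERS = {
    "minimal": RiskTier("minimal", ("basic-safety",), 0.20, 72),
    "limited": RiskTier("limited", ("basic-safety", "bias-eval"), 0.10, 48),
    "high": RiskTier("high", ("basic-safety", "bias-eval", "red-team"), 0.05, 24),
}


def baseline_for(tier_name: str) -> RiskTier:
    """Look up the uniform national baseline for a given tier."""
    return TIERS[tier_name]
```

The point of the structure is the one made in the text: once tiers and baselines are defined centrally, sector measures can reference them instead of each inventing their own, which is where the cost savings come from.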
What other jurisdictions are doing
The European Union's tiered approach offers strong safeguards and legal certainty, but it comes with heavy compliance burdens that larger firms can absorb more easily than SMEs. See the EU AI Act.
Japan leans principle-first: closer to China's pilots-and-standards path, but with fewer binding enforcement levers. For high-level principles, the OECD AI Principles remain a useful reference point.
What to expect next in China
Expect incremental moves: more targeted measures, refined security reviews and expanded pilots in areas like healthcare and smart cities. Standard-setting bodies will shape model evaluation, watermarking, data governance and cybersecurity testing.
Major hubs such as Shanghai, Beijing and Shenzhen will continue to act as testbeds for data access, public procurement of AI tools and regulatory supervision models.
What could catalyze a comprehensive law
A serious incident could speed things up. Model collapse, systemic vulnerabilities in widely deployed systems or high-profile AI-enabled fraud could expose limits in current rules and harden public demand for an umbrella statute.
China has moved this way before: high-profile fraud cases helped accelerate personal information protection legislation, and a major AI failure could play the same catalytic role here.
Practical steps for regulators and in-house counsel
- Map your obligations: crosswalk PIPL, the Cybersecurity Law, the Data Security Law and sector measures against AI use cases.
- Track local pilots: monitor municipal rules (e.g., Shanghai) for data access and procurement models that could affect national practice.
- Standardize model risk tiers: define documentation, testing and sign-off requirements by risk level; align with emerging national standards.
- Tighten data provenance: record consent bases, collection contexts and data-sharing terms; keep verifiable audit trails across datasets.
- Pre-plan explainability: set disclosure playbooks that meet transparency requirements without breaching trade-secret or security constraints.
- Strengthen incident response: create AI-specific triggers, reporting timelines and remediation steps that can plug into a future statute.
- Engage standard-setters: participate in technical committees to shape testing, watermarking and security baselines.
- Support SMEs: encourage shared testing infrastructure, templates and safe-harbor pilots to reduce compliance overhead.
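The provenance and crosswalk steps above can be sketched as a minimal record-keeping schema. This is an assumed, illustrative design (the field names and the sample records are invented for the example), but it shows the shape of an auditable trail that records consent bases and collection contexts per dataset and flags gaps for review:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DatasetRecord:
    """Provenance entry for one training dataset (illustrative schema)."""
    dataset_id: str
    consent_basis: Optional[str]  # e.g. "consent", "public-interest"; None if unrecorded
    collection_context: str       # where and how the data was gathered
    sharing_terms: str            # contractual terms attached to the data


def missing_consent_basis(records):
    """Return dataset IDs with no recorded legal basis, as candidates for review."""
    return [r.dataset_id for r in records if r.consent_basis is None]


# Hypothetical sample records for illustration.
records = [
    DatasetRecord("ds-001", "consent", "app sign-up flow", "internal use only"),
    DatasetRecord("ds-002", None, "public web crawl", "unspecified"),
]
```

A registry like this also eases the training-data tension flagged earlier: when sets collected under different conditions are blended, the per-dataset consent basis stays queryable after the fact.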
Bottom line
China is trading speed of passage for flexibility and learning. The path now runs through pilots and standards, with a coordinating statute likely later.
For legal and government teams, the job is to reduce ambiguity, prepare for convergence and build modular programs that can absorb a comprehensive law when it arrives.