China to tighten AI oversight by amending the Cybersecurity Law
China is set to update the Cybersecurity Law to address artificial intelligence safety and governance. A draft amendment introducing AI-specific provisions will be reviewed by the Standing Committee of the National People's Congress (NPC) in its upcoming session. The goal is clear: support development while enforcing guardrails that keep systems beneficial, safe, and fair.
Officials highlighted strong progress in core digital capabilities since the law took effect in 2017, alongside growing exposure to cybercrime and new risk vectors created by advanced AI. The amendment seeks to bring AI under a clearer, enforceable framework without stalling innovation.
What the draft adds
- An AI-focused provision to balance technological progress with security oversight.
- Policy support for foundational AI research and key algorithm development.
- Upgrades to AI infrastructure requirements and ethical standards.
- Stronger monitoring, assessment, and regulation of AI safety risks.
- Closer alignment with the Civil Code and the Personal Information Protection Law (PIPL).
Legislative calendar
The NPC Standing Committee will meet in Beijing from Friday to Tuesday to review the draft amendment. Lawmakers will also consider revisions to the Organic Law of the Villagers' Committees and the Organic Law of the Urban Residents' Committees, including enhanced community responsibilities (elderly care, women's support, and left-behind children) and better dispute mediation in property management. Additional items include a draft amendment to the Environmental Protection Tax Law and a draft law on procuratorial public-interest litigation.
What this means for in-house counsel and compliance leaders
Expect heightened scrutiny on how AI systems are built, assessed, deployed, and monitored. Legal teams should prepare for clearer expectations around risk management, privacy alignment, and accountability across the AI lifecycle.
- Inventory and classify AI use cases by risk; document data sources, training sets, and model purposes.
- Stand up an AI risk assessment program: pre-deployment testing, red-teaming where relevant, human-in-the-loop controls, and incident response plans for model failures or misuse.
- Tighten privacy compliance under PIPL: data mapping for training/inference, purpose limitation, minimization, lawful basis, consent where required, and personal information protection impact assessments (PIPL's analogue of DPIAs) for high-risk processing.
- Contract for accountability: vendor diligence, audit rights, security obligations, data localization, model update/change notifications, and clear liability for safety defects.
- Governance and documentation: model cards or equivalent summaries, evaluation reports, decision-logging for high-impact systems, and traceability of algorithm updates.
- Ethics and oversight: cross-functional AI committee (legal, security, data, product), escalation thresholds, and regular training tied to policy.
- Operational readiness: monitoring for harmful outputs, abuse prevention, rate limiting, and clear takedown/remediation procedures.
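The inventory-and-classify step above can be sketched as a minimal record structure. Everything here is an illustrative assumption: the field names, the three risk tiers, and the triage rules are not terms from the draft amendment, and any real program should map them to actual regulatory criteria.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in an AI use-case inventory (illustrative schema)."""
    name: str
    purpose: str                   # documented model purpose
    data_sources: list             # where training/inference data comes from
    processes_personal_info: bool  # triggers PIPL obligations
    automated_decisions: bool      # affects individuals when True
    human_in_the_loop: bool        # mitigating control

def classify_risk(uc: AIUseCase) -> str:
    """Illustrative triage: personal data plus automated decisions
    without human oversight is treated as high risk."""
    if uc.processes_personal_info and uc.automated_decisions:
        return "medium" if uc.human_in_the_loop else "high"
    if uc.processes_personal_info or uc.automated_decisions:
        return "medium"
    return "low"

# Hypothetical example: a credit-scoring model with no human review
scoring = AIUseCase(
    name="credit-scoring-v2",
    purpose="consumer credit risk scoring",
    data_sources=["loan applications", "repayment history"],
    processes_personal_info=True,
    automated_decisions=True,
    human_in_the_loop=False,
)
print(classify_risk(scoring))  # high
```

Even a toy scheme like this forces the documentation the checklist calls for: purpose, data sources, and the controls that justify a given tier.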
Privacy and civil liability alignment
Because the draft aims to sync with the Civil Code and PIPL, expect closer links between AI risk controls, personal data processing rules, and civil responsibility. This likely raises the bar for transparency, consent management, data subject rights handling, and evidence of due care in AI design and deployment.
Action plan now
- Map your AI systems and vendors; identify gaps against current cybersecurity and privacy controls.
- Establish a single policy for AI development and procurement that integrates security, safety, and PIPL obligations.
- Pilot a lightweight conformity file per AI system: purpose, data, testing, safeguards, monitoring, and contacts for regulators.
- Run a tabletop exercise for an AI safety incident to validate escalation and remediation flows.
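The "lightweight conformity file" from the action plan can be kept as a simple structured record per system. The section names and values below are hypothetical placeholders, not a format prescribed by any regulator; the point is that each file covers purpose, data, testing, safeguards, monitoring, and a contact.

```python
import json

# Minimal per-system conformity record (illustrative schema and values).
conformity_file = {
    "system": "support-chatbot",
    "purpose": "customer service question answering",
    "data": {
        "training_sources": ["public FAQ corpus", "anonymized tickets"],
        "personal_information": False,
    },
    "testing": {
        "pre_deployment_eval": "latest red-team report",
        "known_limitations": ["may produce inaccurate policy details"],
    },
    "safeguards": ["human escalation path", "output filtering"],
    "monitoring": "weekly harmful-output sampling",
    "regulator_contact": "compliance@example.com",  # placeholder address
}

# Completeness check before the record is filed or shared
required = {"system", "purpose", "data", "testing",
            "safeguards", "monitoring", "regulator_contact"}
missing = required - conformity_file.keys()
assert not missing, f"conformity file incomplete: {missing}"
print(json.dumps(conformity_file, indent=2))
```

Storing these records as JSON (or YAML) keeps them diffable and auditable, which helps demonstrate due care if a regulator asks for evidence.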
For official updates and texts, monitor the websites of the National People's Congress and the Cyberspace Administration of China.