China’s New AI Safety Association: What It Means for AI Development and Global Collaboration
Since the debut of the DeepSeek-R1 open-source reasoning model in January 2025, China has sharply increased its focus on artificial intelligence (AI) as a core driver of economic growth. The country is pushing for widespread AI adoption and aims for self-sufficiency across the entire AI stack. At the same time, China is starting to address the potential catastrophic risks posed by frontier AI and is opening up to international cooperation on these issues.
A key development in this shift is the February 2025 launch of the China AI Safety and Development Association (CnAISDA, 中国人工智能发展与安全研究网络). This new organization is China’s counterpart to AI safety institutes established recently in countries like the UK and the US. Despite its significance, public information about CnAISDA remains limited. This article breaks down what CnAISDA is, how it was formed, and what it indicates about China's AI policy direction.
What Is CnAISDA?
Function
Currently, CnAISDA’s main role is to represent China in international AI discussions, including collaboration with other AI safety institutes (AISIs). This highlights China’s willingness to engage on frontier AI risks beyond its usual preference for the United Nations framework. Unlike the UK’s and US’s AISIs, CnAISDA is not yet set up to carry out significant domestic tasks such as independently testing or evaluating advanced AI models.
Structure
Rather than creating a new standalone agency, CnAISDA operates as a network, integrating multiple existing Chinese AI-focused institutions. It acts as a coalition, representing China abroad and advising the government without the state having to “pick winners” in the AI policy space. Though its exact relationship with the government is unclear, CnAISDA’s leaders assert it has official support.
Personnel
CnAISDA offers a formal platform for influential experts with strong government and international ties. Key figures include Fu Ying, former vice minister of foreign affairs; Andrew Yao, China’s only Turing Award winner; and Xue Lan, a notable adviser to the State Council. This setup elevates policy entrepreneurs who can influence China’s AI governance both domestically and internationally.
Origins of CnAISDA
CnAISDA is the result of years of strategic efforts by Chinese AI governance leaders who engaged in growing international conversations about frontier AI risks. Interest in AI safety within China began in the late 2010s among a small group of scientists and gained legitimacy through participation in global forums like the 2023 AI Safety Summit in the UK and statements on AI extinction risk by international groups.
Following the establishment of AISIs in the US and UK, a group of Chinese policy entrepreneurs created CnAISDA to engage globally on AI governance while fitting within China’s domestic political framework. This reflects a balancing act between acknowledging AI safety risks and advancing economic growth.
What CnAISDA Signals About China’s AI Policy
The creation of CnAISDA suggests China recognizes the importance of addressing catastrophic AI risks and building international standards. President Xi Jinping’s recent remarks hint that the association may influence higher-level policy thinking and future regulations.
However, challenges remain. Cooperation with the US is uncertain due to differences in focus and political tensions. Within China, the priority remains AI innovation for economic growth, which may limit motivation to rigorously tackle catastrophic risks.
The upcoming Shanghai World AI Conference in July will be an early test of CnAISDA’s influence and China’s commitment to frontier AI safety. Though primarily an international engagement platform for now, CnAISDA could eventually support more robust domestic AI risk management.
For global stakeholders, CnAISDA opens new channels for dialogue and insight into China’s evolving AI governance. It shows how AI safety ideas can spread internationally in various forms, offering alternative paths to reducing shared risks beyond formal treaties.
Introduction: DeepSeek-R1’s Impact and the Need to Understand China’s AI Safety Ecosystem
The release of DeepSeek-R1 marked China’s arrival as a global competitor in frontier AI. The model’s breakthrough spurred Chinese leadership to meet with DeepSeek’s CEO and emphasize rapid AI diffusion and self-sufficiency.
While the drive for AI innovation is clear, questions linger about China’s approach to AI safety and security. How China manages these issues will affect its global competitiveness, military AI capabilities, and the potential for catastrophic AI risks.
This article focuses on the China AI Safety and Development Association, which stands as the country’s answer to the AI safety institutes established worldwide. Unlike the US and UK AISIs, CnAISDA prioritizes international representation over direct domestic AI oversight, leveraging a network of existing institutions rather than creating new bureaucracy.
The association offers a platform for experts with government ties to influence China’s AI safety policies amid a complex balance between innovation and risk management.
The Global Landscape of Frontier AI Risks and Safety Institutes
AI Safety Institutes (AISIs) are government-supported organizations tasked with reducing AI risks, especially catastrophic ones linked to advanced AI systems. These institutes vary widely in scope, with some focusing on immediate concerns like privacy and bias, and others on frontier AI risks such as loss of control scenarios.
Many AISIs, like those in the US and UK, are new government bodies with domestic evaluation roles. CnAISDA, however, follows a model seen in Canada, France, and India by forming a coalition of existing institutions.
Its focus on catastrophic risks is notable given emerging evidence from AI developers about troubling behaviors in advanced systems, including deception and evasion of safety controls. These challenges complicate efforts to measure and mitigate such risks.
Institutional Design and Key Actors in CnAISDA
CnAISDA’s network includes prestigious universities like Tsinghua, government research centers such as the Beijing Academy of Artificial Intelligence (BAAI), and groups under the Ministry of Industry and Information Technology (MIIT). This structure avoids creating new bureaucracy and gives the Chinese Communist Party (CCP) flexibility in AI governance.
Rather than controlling domestic AI developers through binding safety requirements, CnAISDA consolidates expertise to present a unified front internationally while allowing innovation to proceed freely at home.
International Engagement as a Priority
CnAISDA’s main goal is to centralize China’s international AI safety engagement. Unlike US and UK AISIs, which combine domestic testing with global dialogue, CnAISDA focuses on representing China abroad without assuming domestic regulatory roles.
This shift also signals China’s willingness to engage on frontier AI risks outside the UN system, a departure from its traditional diplomatic approach. The association’s launch at the Paris AI Action Summit in February 2025 reinforced this international orientation.
Core Institutions in China’s Networked Approach
The association brings together China’s top AI risk-focused bodies, including:
- China Academy of Information and Communications Technology (CAICT)
- Shanghai AI Laboratory
- Beijing Academy of Artificial Intelligence (BAAI)
- China Center for Information Industry Development (CCID)
These institutions conduct various AI safety evaluations, from testing outputs that affect China’s national image to assessing catastrophic risks like AI-assisted development of dangerous substances. However, the depth and transparency of these evaluations vary.
CCID’s role is intriguing as it mainly supports industrial development and military-civilian innovation rather than AI policy specifically. Its presence may balance the safety-focused institutions with development interests within CnAISDA.
The association’s impact will likely come from the increased influence of its member experts and institutions rather than from new regulatory powers.
Elevating Experts Connected to the Government
CnAISDA formalizes and amplifies the voices of established AI experts with strong government links. This approach provides these experts with greater visibility in international AI governance without creating new bureaucratic layers.
Tsinghua University stands out as the intellectual and organizational center of CnAISDA. Its address and contact point are listed as Tsinghua, and many association experts hail from there. Given Tsinghua’s close ties to the CCP and its leading role in STEM fields, the university’s central role reflects the party’s method of channeling policy ideas through trusted academic institutions.
Looking Ahead
The formation of CnAISDA marks an important step for China’s AI safety ecosystem and global AI governance. It creates a platform for China to participate in international AI safety discussions without immediately imposing domestic restrictions that might slow innovation.
The balance between economic growth and AI safety will be tested in the months ahead, especially at the Shanghai World AI Conference. Observers should watch for concrete commitments on frontier AI safety and whether CnAISDA gains influence over China’s domestic AI policies.
For IT and development professionals, staying informed about China’s evolving AI safety landscape is crucial. It affects global standards, competitive dynamics, and the overall risk environment for advanced AI technologies.
To deepen your expertise in AI and stay ahead in this shifting landscape, consider exploring comprehensive AI courses and training available at Complete AI Training.
Your membership also unlocks: