China and the United States Can Compete and Still Cooperate on AI Safety
Both countries want to lead in AI. Competition is healthy. But some risks are so big that failure in one country spills over to everyone else. That's where focused cooperation makes sense.
High-stakes threats, such as models that help design biological agents, automate cyberattacks, or enable mass social manipulation, don't respect borders. Managing these risks requires shared guardrails and fast, trusted channels for technical coordination. We've done this before: US-Soviet cooperation on nuclear safety proved that rivals can still reduce shared risks without giving up strategic advantage.
Where cooperation delivers value
- Joint red-team testing: Run stress tests on foundation models using common playbooks and neutral facilities. Share results at the level of risks and mitigations, not trade secrets.
- Incident reporting: Create a cross-border, anonymized clearinghouse for AI incidents and near misses with a standard severity scale. The goal is fast learning, not blame; a sketch of what such a record might contain follows this list.
- Pre-deployment checks: Agree on evaluation suites for bio, cyber, and model security before major releases or capability jumps.
- Compute/run disclosures: Voluntary reporting for unusually large training runs, including safety controls, to flag operations that need extra scrutiny.
- Research on technical safeguards: Co-author methods for model containment, provenance, and abuse prevention that anyone can implement.
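To make the incident-reporting idea concrete, here is a minimal sketch of what an anonymized incident record with a shared severity scale could look like. The field names, the four-level scale, and the example values are illustrative assumptions, not an agreed standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import List

class Severity(Enum):
    # Illustrative four-level scale; any shared scheme would need to be negotiated.
    LOW = 1        # no harm, contained quickly
    MODERATE = 2   # limited harm or a near miss with clear mitigations
    HIGH = 3       # real-world harm or systemic exposure
    CRITICAL = 4   # cross-border impact or loss of control over a capability

@dataclass
class IncidentReport:
    """Anonymized record for a cross-border clearinghouse (hypothetical schema)."""
    incident_id: str                      # random identifier, not traceable to the submitter
    severity: Severity
    capability_area: str                  # e.g. "bio", "cyber", "manipulation"
    summary: str                          # what happened, without proprietary detail
    mitigations: List[str] = field(default_factory=list)
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example submission: shares the failure pattern and the fix, not weights or customer data.
report = IncidentReport(
    incident_id="inc-2a9f",
    severity=Severity.MODERATE,
    capability_area="cyber",
    summary="Model produced working exploit steps when prompted via a role-play jailbreak.",
    mitigations=["Added jailbreak pattern to refusal training set", "Tightened output filter"],
)
print(report.severity.name, report.capability_area)
```

The point of a fixed schema is that submissions from either country can be aggregated and compared quickly, without any party having to reveal how its models were built.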
What governments can do now
- Stand up a technical safety channel: A joint working group focused on evaluations, incident response, and secure data exchange, kept separate from broader political talks.
- Fund pre-competitive labs: Support independent testbeds where researchers from both countries can run agreed safety tests under clear rules.
- Adopt common frameworks: Map cooperation to the NIST AI Risk Management Framework and compatible international practices.
- Create an incident clearinghouse: Back a neutral platform for confidential submissions and public summaries so lessons spread quickly. Existing resources like the AI Incident Database can inform the design.
- Set reciprocity rules: Access to joint testing and data flows depends on meeting the same safety obligations on both sides.
How labs and researchers can contribute
- Publish safety cards: Document dangerous capability tests, mitigations, and red-team coverage before release (see the sketch after this list).
- Adopt shared evaluations: Use common benchmarks for biosecurity, cyber misuse, jailbreak resistance, and model provenance.
- Participate in drills: Run cross-organizational exercises for coordinated response to high-severity incidents.
- Share artifacts responsibly: Exchange test suites, not weights; share patterns of failure, not proprietary data.
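As one way to picture a "safety card", here is a minimal sketch of the fields a pre-release document might carry. The structure, field names, and example results are assumptions made for illustration, not an established format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SafetyCard:
    """Hypothetical pre-release safety card: what was tested, what was found, what was fixed."""
    model_name: str
    dangerous_capability_tests: Dict[str, str]   # test suite name -> headline result
    mitigations: List[str]                       # controls applied before release
    red_team_coverage: List[str]                 # threat areas exercised by red teams
    open_risks: List[str] = field(default_factory=list)  # known gaps disclosed up front

# Example card with made-up values, showing the level of detail that can be shared
# without exposing proprietary training data or model weights.
card = SafetyCard(
    model_name="example-model-v1",
    dangerous_capability_tests={
        "bio-uplift-suite": "no uplift beyond publicly available sources",
        "cyber-misuse-suite": "refused the large majority of exploit-generation prompts",
    },
    mitigations=["Refusal fine-tuning", "Output classifiers for dual-use content"],
    red_team_coverage=["biosecurity", "cyber misuse", "jailbreak resistance"],
    open_risks=["Multi-turn jailbreaks still succeed occasionally"],
)
print(f"{card.model_name}: {len(card.dangerous_capability_tests)} capability suites documented")
```

If both countries' labs filled in the same fields, reviewers and counterparts could compare coverage directly, which is exactly the kind of pre-competitive exchange this section argues for.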
Scope that protects competition
This is about strategic safety, not market strategy. Cooperation should stick to pre-competitive areas: testing methods, incident response, and safeguards that keep models from being abused. Proprietary models, data, and customers stay off the table. Reciprocity, audit trails, and narrow data-sharing rules keep the focus tight.
What success looks like
- Fewer duplicated safety efforts and better coverage of the edge cases that cause real harm.
- Faster, coordinated responses to serious incidents and near misses.
- Clearer public trust that advanced AI is being managed with care, even as competition continues.
The stakes are high and rising. Compete on capability. Cooperate where failure is unacceptable. That balance keeps innovation moving while reducing the risk of accidents that spill across borders.
If your team is building or overseeing AI systems and needs structured upskilling in safety and evaluations, explore focused options here: AI courses by job.