AI Governance Needs More Than a U.S.-China Deal
Unregulated A.I. is risky, but a U.S.-China duopoly erodes trust and splinters standards. Build an inclusive, testable compact with shared safety thresholds, audits, and open evals.

Could China Be a Partner in A.I. Evolution? Build Governance That Works for Everyone
The risks from unregulated A.I. are real. But handing governance to a U.S.-China duopoly is a shortcut that will backfire.
Many global institutions were built by and for Western interests, and that perception has cost them trust across the Global South. If we repeat that pattern with A.I., we'll get weak compliance, fragmented standards, and higher systemic risk.
The Problem With a Two-Player Game
- Standards reflect great-power priorities, not broad public interest.
- Geopolitics bleeds into safety, throttling open research and credible oversight.
- Developers face a patchwork of rules that slow deployment and raise costs without improving outcomes.
- People outside core blocs get tools later and with fewer protections.
An Inclusive A.I. Compact: What It Should Include
- Common safety thresholds for high-risk use (bio, cyber, critical infrastructure, elections).
- Independent testing labs with legal access to evaluate frontier systems before wide release.
- Model and compute registries for systems above defined capability or training FLOP thresholds.
- Incident reporting that mirrors aviation: near-miss logs, root-cause analyses, and public summaries (a registry and incident-record sketch follows this list).
- Data provenance and usage disclosures: source classes, jurisdictions, consent, and license status.
- Shared red-team libraries and standardized evals for misuse, discrimination, security, and content safety.
- Cross-border audit protocols so trust doesn't depend on bilateral politics.
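
To make the registry and incident-reporting items concrete, here is a minimal sketch of what shared record shapes could look like. Everything here is an assumption for illustration: the field names, example values, and the `RegistryEntry`/`IncidentReport` classes are hypothetical, not a proposed standard.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record shapes for a shared model registry and an
# aviation-style incident log; field names are illustrative, not a standard.

@dataclass
class RegistryEntry:
    model_id: str                 # stable identifier for the released system
    developer: str
    training_flop: float          # reported training compute
    capability_evals: dict        # eval name -> score, from shared suites
    release_date: date
    data_provenance: list = field(default_factory=list)  # source classes, licenses

@dataclass
class IncidentReport:
    model_id: str
    severity: str                 # "near-miss", "harm", or "critical"
    description: str
    root_cause: str
    mitigations: list
    public_summary: str           # published, like an aviation safety digest

entry = RegistryEntry(
    model_id="example-frontier-1",
    developer="Example Lab",
    training_flop=3e25,
    capability_evals={"cyber_misuse": 0.12, "bio_uplift": 0.04},
    release_date=date(2025, 6, 1),
    data_provenance=["licensed_web_text", "public_domain_code"],
)
```

The design point is interoperability: if every signatory reports the same fields, cross-border auditors can compare systems without bilateral negotiation over formats.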
Who Gets a Seat
- Governments across regions (African Union, ASEAN, CARICOM, Mercosur, EU, GCC).
- Standards bodies and labs (ISO/IEC, national standards agencies, accredited safety labs).
- Civil society, labor, and communities most affected by deployment.
- Open-source communities and academic consortia, not just large vendors.
- Industry across sizes: startups, open-model stewards, and major platforms.
Practical Steps You Can Use Now
- Adopt the NIST AI Risk Management Framework (AI RMF) across the build, test, deploy, and monitor phases.
- Ship model cards, data statements, and clear capability limits with every release.
- Threat-model misuse like you threat-model security: attack trees, red-teams, and tracked mitigations.
- Use gated access for powerful features; require verified use cases for dual-use tools.
- Log eval coverage (safety, fairness, security) and link it to go/no-go decisions (see the gating sketch after this list).
- For public-sector procurements: require pre-deployment impact assessments and independent audits.
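
As a rough illustration of the eval-logging step above, the sketch below gates a release on recorded eval coverage. The metric names and thresholds are invented for the example; the point is only that missing or failing coverage blocks the release.

```python
# Hypothetical eval names, results, and thresholds; none of these numbers
# come from a real framework, they only illustrate the gating pattern.
EVAL_RESULTS = {
    "safety_refusal_rate": 0.97,     # higher is better
    "jailbreak_success_rate": 0.03,  # lower is better
    "fairness_gap": 0.02,            # lower is better
}

RELEASE_GATES = {
    "safety_refusal_rate": ("min", 0.95),
    "jailbreak_success_rate": ("max", 0.05),
    "fairness_gap": ("max", 0.03),
}

def release_decision(results: dict, gates: dict) -> bool:
    """Return True only if every gated metric is present and meets its threshold."""
    for metric, (direction, threshold) in gates.items():
        value = results.get(metric)
        if value is None:
            return False  # missing coverage is an automatic no-go
        if direction == "min" and value < threshold:
            return False
        if direction == "max" and value > threshold:
            return False
    return True

print("GO" if release_decision(EVAL_RESULTS, RELEASE_GATES) else "NO-GO")
```

Treating absent results as a no-go is the important habit: it turns "we didn't run that eval" into a visible release blocker rather than a silent gap.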
Where the U.S. and China Fit
Both should participate. Neither should dominate. Their labs, chips, and research capacity matter, but legitimacy depends on shared rulemaking, open evaluation, and reciprocal access for auditors.
That means publishing test results, allowing third-party probes, and contributing to common safety libraries without strings attached.
Metrics That Matter
- Reduction in critical incidents and near-misses per deployment hour.
- Safety filter quality: false positive/negative rates across languages and contexts (see the worked sketch after this list).
- Access equity: latency, cost, and feature parity across regions and languages.
- Model transparency: proportion of releases with full documentation and evals.
- Sustainability: training and inference energy per task, reported and verified.
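
A short worked sketch, using invented counts, of how two of these metrics could be computed consistently across labs so the reported numbers are actually comparable.

```python
# Illustrative counts only: incidents per deployment hour and per-language
# false positive/negative rates for a safety filter.

def incident_rate(critical_incidents: int, near_misses: int, deployment_hours: float) -> float:
    """Combined critical incidents and near-misses per 1,000 deployment hours."""
    return 1000 * (critical_incidents + near_misses) / deployment_hours

def filter_error_rates(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """False positive rate (benign blocked) and false negative rate (harmful allowed)."""
    return fp / (fp + tn), fn / (fn + tp)

print(incident_rate(critical_incidents=2, near_misses=14, deployment_hours=40_000))

# Reporting the same rates per language surfaces uneven protection.
for lang, (tp, fp, tn, fn) in {"en": (950, 30, 9000, 20), "sw": (400, 90, 3500, 60)}.items():
    fpr, fnr = filter_error_rates(tp, fp, tn, fn)
    print(f"{lang}: FPR={fpr:.3f} FNR={fnr:.3f}")
```

Breaking the rates out by language is what exposes access-equity problems: an aggregate number can look fine while one language gets markedly worse protection.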
Funding and Incentives
- Safety bounties and recurring grants for independent red-teams and open benchmarks.
- Compute credits for accredited research orgs in underrepresented regions.
- An "alignment levy" on frontier-model revenue to fund cross-border safety labs and audits.
- Public-good datasets with clear licenses and privacy controls.
Policy Priorities for Governments
- Risk-tiered rules: higher scrutiny for higher capability or deployment scale (see the tiering sketch after this list).
- Interoperable standards tied to procurement to drive adoption across vendors.
- Data rights enforcement: consent, opt-out, and redress that actually work.
- International incident exchange with legal safe harbor for good-faith reporting.
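
To show what "risk-tiered" can mean in practice, here is a deliberately simplified sketch that maps capability, scale, and domain to a scrutiny tier. The thresholds and tier labels are assumptions for illustration only.

```python
# Placeholder thresholds for illustration; a real regime would set them
# through the compact's standards process, not in application code.
def risk_tier(training_flop: float, monthly_users: int, high_risk_domain: bool) -> str:
    """Assign a scrutiny tier from capability, deployment scale, and domain."""
    if high_risk_domain or training_flop >= 1e26:
        return "tier 3: pre-deployment audit and independent evals"
    if training_flop >= 1e25 or monthly_users >= 1_000_000:
        return "tier 2: registration and incident reporting"
    return "tier 1: self-assessment and documentation"

print(risk_tier(training_flop=5e24, monthly_users=2_500_000, high_risk_domain=False))
```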
What Builders and Researchers Can Do Next
- Benchmark against agreed safety suites before every major release.
- Publish capability overviews with explicit out-of-scope uses and mitigations.
- Adopt staged rollouts with kill switches and rollback plans (see the rollout sketch after this list).
- Open-source eval tooling where possible to make results reproducible.
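
One way to read "staged rollouts with kill switches" as code: the sketch below is a minimal, assumed setup (hypothetical stage fractions, flag store, and incident budget), not a production release system.

```python
# Staged rollout with a kill switch; the stages and the flag store are
# hypothetical stand-ins for whatever release tooling a team already uses.
ROLLOUT_STAGES = [0.01, 0.05, 0.25, 1.0]    # fraction of traffic per stage

KILL_SWITCH = {"new_model_enabled": True}   # flip off to roll back instantly

def serve_with_new_model(user_id: int, stage_fraction: float) -> bool:
    """Route a deterministic slice of users to the new model unless killed."""
    if not KILL_SWITCH["new_model_enabled"]:
        return False                        # rollback: everyone gets the old model
    return user_id % 100 < stage_fraction * 100

def advance_stage(current_stage: int, incident_rate: float, budget: float) -> int:
    """Move to the next stage only while monitored incidents stay under budget."""
    if incident_rate > budget:
        KILL_SWITCH["new_model_enabled"] = False  # trip the kill switch
        return current_stage
    return min(current_stage + 1, len(ROLLOUT_STAGES) - 1)

# Example: 5% stage, incident rate within budget, so the rollout advances.
print(serve_with_new_model(user_id=42, stage_fraction=ROLLOUT_STAGES[1]))
print(advance_stage(current_stage=1, incident_rate=0.2, budget=0.5))
```

The key property is that rollback is a single flag flip, so the decision to pull a model does not depend on redeploying anything.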
Credible References
For practical frameworks and shared language, see the OECD AI Principles and the NIST AI RMF. Both are vendor-neutral and actionable.
Skill Up Your Team
If you're rolling out A.I. in government, IT, or research, align training with real job tasks. A curated starting point: AI courses by job role.
Bottom line: A stable A.I. future won't be negotiated by two capitals. Build an inclusive, testable governance compact, tie it to real incentives, and measure what matters. Everyone gets safer, faster progress, and more trust in the process.