AI beyond borders: Practical governance for safer, scalable care
As AI moves deeper into hospitals across China, the central question sounds familiar to anyone in global health: how do we pair speed with safety and keep public trust while we scale?
Christoph Benn, director of the JLI Center for Global Health Diplomacy and chair of the board at Health AI, has spent decades working with governments, UN agencies, and health systems across Asia, Africa, and Europe. His view is clear: the tools we used to manage cross-border health risks - from pandemics to drug resistance - are directly useful for medical AI.
"Artificial intelligence can make healthcare more effective and more efficient. But those benefits are not automatic. AI has to be used responsibly, according to clear guidelines and standards, so that people can trust it," he said.
Innovation with guardrails - what global health already knows
Benn's core message: regulation is not a brake. It's the seatbelt. "It's about making sure systems are safe, validated, and accountable, especially when decisions affect human lives."
The playbook from vaccines and pharmaceuticals still applies. Independent validation, post-market surveillance, clear lines of accountability, and timely recall or rollback when harm signals appear - the same logic fits digital tools.
Hospitals cannot self-police in isolation. Governments need capable regulators, and those authorities need to talk to each other. Shared methods and mutual recognition reduce duplication and raise the floor on safety.
China's edge: scale, speed, and system-level learning
China's public insurance system and centralized planning give it an unusual advantage: the ability to test, adjust, and roll out AI across an entire health system, not just a few flagship hospitals.
From AI-assisted diagnostics to remote surgery and emerging Agent Hospitals, implementation is happening with real patients in routine care. That scale is valuable to the global community - if insights are shared and benchmarks are transparent.
Build institutional capacity, not just algorithms
Benn argues every country needs the ability to assess, certify, and validate the AI tools it uses. That capacity cannot be limited to wealthy nations. The infrastructure is as much governance as it is hardware and software.
For ministries, payers, and hospital groups, that means investing in evaluation labs, registries for AI-enabled devices, and rapid feedback loops between clinicians and oversight bodies. It also means teaching teams to read model cards, audit bias, and question vendor claims.
Equity is a design choice
Concern that AI could widen gaps is real. But inequality already exists. The actionable question is whether we build systems that work in low-resource settings - where connectivity is variable, data is thin, and clinical workflows differ.
Shared standards, open evaluation frameworks, and technology transfer can prevent a future where advanced AI is locked inside elite city hospitals. Benn's view is pragmatic: cooperation is no longer optional for any country that wants safe, trusted AI in care.
What healthcare leaders can do now
- Set up an AI oversight committee with clinical, safety, IT, legal, and patient representation. Give it authority to approve, monitor, pause, or decommission tools.
- Adopt a procurement checklist: intended use, performance on local data, bias and drift testing, cybersecurity posture, explainability, incident response, and clear liability terms.
- Require external validation and real-world performance monitoring. Track false positives/negatives, turnaround time, clinician override rates, and patient outcomes (a monitoring sketch follows this list).
- Mandate model documentation (data sources, training/validation cohorts, limitations) and refresh schedules; a minimal model-card template appears after the list. No "black box" deployments for high-impact use cases.
- Pilot in controlled environments before scaling. Use A/B testing or stepped-wedge rollouts with predefined stop criteria (see the stop-rule sketch below).
- Integrate with workflow, not around it. Minimize clicks, surface explanations, and make escalation paths obvious.
- Protect patients: consent where appropriate, clear patient communications, audited data use, and strong security controls.
- Invest in people: clinician training on AI literacy, human factors, and escalation; data teams for continuous monitoring; and simulation drills for failure modes.
- Share learning: contribute de-identified results to registries and collaborate with peer hospitals and regulators to align on metrics and thresholds.
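To make the monitoring bullet concrete, here is a minimal sketch, in Python, of how a team might compute those metrics from logged cases. Every field and function name is an illustrative assumption, not a standard; a real deployment would pull these values from the EHR or device logs.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class CaseLog:
    """One logged AI-assisted decision. All field names are illustrative."""
    ai_flagged: bool           # the model flagged the case
    confirmed: bool            # ground truth after clinical review
    clinician_overrode: bool   # the clinician rejected the AI suggestion
    turnaround_minutes: float  # order-to-result time

def monitoring_summary(cases: list[CaseLog]) -> dict[str, float]:
    """Headline metrics for an oversight committee dashboard."""
    if not cases:
        return {}
    n = len(cases)
    tp = sum(c.ai_flagged and c.confirmed for c in cases)
    fp = sum(c.ai_flagged and not c.confirmed for c in cases)
    fn = sum(not c.ai_flagged and c.confirmed for c in cases)
    tn = n - tp - fp - fn
    return {
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
        "false_negative_rate": fn / (fn + tp) if fn + tp else 0.0,
        "override_rate": sum(c.clinician_overrode for c in cases) / n,
        "median_turnaround_min": median(c.turnaround_minutes for c in cases),
    }
```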
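The model documentation bullet can start as simply as a structured record kept alongside each deployed tool in the device registry. This template is a sketch; the schema and values are hypothetical placeholders, not a regulatory format.

```python
# A minimal model-card record for an AI-enabled device registry.
# Every field name and value below is a hypothetical placeholder.
MODEL_CARD = {
    "tool": "chest-xray-triage-v2",  # hypothetical tool identifier
    "intended_use": "Adult ED triage support; not validated for pediatrics",
    "data_sources": ["Site A PACS, 2019-2023, de-identified"],
    "training_cohort": {"n": 52_000, "sites": 3},
    "validation_cohort": {"n": 8_100, "sites": 2, "external": True},
    "known_limitations": ["Lower sensitivity on portable films"],
    "refresh_schedule": "Re-validate quarterly; retrain on confirmed drift",
}
```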
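And for the pilot bullet, one possible shape of a predefined stop criterion: pause the rollout if the false-negative rate drifts past the pre-AI baseline by more than an agreed margin. The 0.02 default here is an assumption a committee would replace with its own pre-agreed threshold, not a recommended value.

```python
def should_pause(current: dict[str, float],
                 baseline: dict[str, float],
                 fn_margin: float = 0.02) -> bool:
    """Predefined stop rule for a pilot or stepped-wedge step.

    Pause if the observed false-negative rate exceeds the pre-AI
    baseline by more than the agreed margin (0.02 is a placeholder).
    """
    return (current["false_negative_rate"]
            > baseline["false_negative_rate"] + fn_margin)
```

In practice, the oversight committee from the first bullet would own both the thresholds and the authority to act when a rule fires.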
Why cross-border cooperation matters
Threats and opportunities don't respect borders. Common benchmarking methods, reference datasets, and agreed reporting templates make it easier to trust results from elsewhere and speed safe rollout.
Regulators coordinating across countries can reduce redundant reviews and lift standards for all. Health systems can benefit by adopting what's already proven - and by avoiding what isn't.
Global South potential
Benn recalls working in rural Africa where hospitals lacked even computers. Today, with mobile connectivity and AI-supported diagnostics, services once out of reach are becoming realistic. That only works at scale if tools are designed for unstable connectivity, multilingual use, and low-cost hardware - and if training and support are part of the package.
Practical resources
- WHO guidance on ethics and governance of AI for health
- WHO regulatory considerations on AI for health
Next steps for teams
- Clinicians and hospital leaders: audit one live AI workflow this month; publish your monitoring metrics and escalation rules internally.
- Regulators and payers: align on a minimum evidence standard and a fast-track for low-risk tools with strong post-deployment monitoring.
- Vendors: deliver model cards, bias testing, and update policies by default; support external validation on local data before scale.
Keep learning
"None of today's major challenges can be solved by countries acting alone." The path to trustworthy medical AI is shared - and so are the benefits.