Five Rules for High-Risk AI Projects, From Everest's New Regulations
Nepal's updated climbing regulations for Mount Everest offer unexpected lessons for CIOs managing high-risk AI systems. The new rules (mandatory local guides, GPS tracking, health certifications, prior-experience requirements, and waste management) address the same core problem in both domains: managing failure in extreme environments where mistakes carry serious consequences.
The EU AI Act classifies high-risk systems as those affecting health, safety, or fundamental rights. They include biometric systems, critical infrastructure controls, hiring tools, law enforcement applications, and judicial decision-making. The parallels to mountaineering are direct: both require preparation, visibility, expertise, and accountability.
Require proven experience before high-risk work
Everest's new rules demand that climbers first summit at least one peak above 7,000 meters in Nepal. That altitude marks the physiological threshold between high and extreme altitude, a critical transition point.
For AI teams, this translates to a ban on shadow AI and unmanaged sprawl. Teams deploying high-risk systems should have documented success with moderate-risk implementations first, and they need to understand the governance requirements they're about to face.
According to KPMG's Q1 2026 AI Pulse Survey, 43% of organizations already restrict autonomous agent decision-making in high-risk areas like core financial or customer-facing workflows. These teams typically require pilot projects with documented safety metrics before full deployment.
Build real-time observability into every project
All Everest climbers now carry GPS tracking chips sewn into their jackets. The purpose is simple: if something goes wrong, rescuers can find you.
High-risk AI systems need the same visibility. Every project should allocate 10% to 15% of its budget to observability tools that track agent intent and decision paths in real time. If an agent drifts into non-compliant behavior, teams should detect and pause it before execution.
Without this tracking, CIOs have exposure, not control.
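The "detect and pause" step can be sketched as a pre-execution guardrail: each proposed agent action is logged with its stated intent and checked against policy before it runs. This is a minimal illustration, not a production pattern; the action names and allowlist below are hypothetical, and real deployments would rely on a dedicated observability platform.

```python
from dataclasses import dataclass, field

# Hypothetical policy: the set of action types this agent is approved to take.
APPROVED_ACTIONS = {"summarize_document", "draft_email", "lookup_record"}

@dataclass
class AgentMonitor:
    audit_log: list = field(default_factory=list)
    paused: bool = False

    def review(self, action: str, intent: str) -> bool:
        """Log the proposed action; pause the agent if it is off-policy."""
        compliant = action in APPROVED_ACTIONS
        self.audit_log.append(
            {"action": action, "intent": intent, "compliant": compliant}
        )
        if not compliant:
            # Halt before execution and leave the decision to a human reviewer.
            self.paused = True
        return compliant

monitor = AgentMonitor()
monitor.review("draft_email", "reply to customer refund request")   # allowed
monitor.review("approve_refund", "issue $500 refund")               # off-policy
print(monitor.paused)  # True: the agent is paused before the action runs
```

The point of the sketch is the ordering: the check and the audit entry happen before execution, which is what turns raw exposure into control.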
Assemble specialist teams, not generalists
Solo climbing on Everest is now prohibited. Every climber must have at least one certified Nepali guide with local knowledge and safety expertise.
Organizations should move away from generalist AI teams toward hybrid specialists with technical depth, domain knowledge, compliance expertise, and cybersecurity skills. This often means bringing in external partners.
The KPMG survey found 48% of organizations plan to deploy AI agents built by trusted vendors rather than building in-house. Cybersecurity should be integrated early, not added later.
Verify credentials before hiring or engaging partners
Climbers over 50 must submit medical certificates with ECG and stress tests. Younger climbers need basic fitness documentation within 30 days of departure.
In AI, third-party certifications can validate vendor and practitioner capabilities. Organizations like Thinkers360 offer credentials based on an expert's published work and domain experience, not just test scores.
Before deploying high-risk AI, run a formal impact assessment to identify potential harms. Establish incident response plans and liability insurance. Treat it like a medical checkup for your organization.
Account for environmental costs
Climbers must pack out all waste using government-approved biodegradable bags. Nothing stays on the mountain.
Global data center investment will exceed $3 trillion over the next five years to meet AI demand. Some organizations report AI infrastructure costs and emissions doubling month-to-month as pilots expand.
CIOs should work with sustainability teams to set environmental targets for both internal and partner data centers. Look for technologies designed to reduce consumption at the architectural level, not bolted on later.
The pattern is clear
Nepal's new climbing rules prioritize quality over quantity. They require preparation, visibility, expertise, and accountability. The same approach works for high-risk AI.
Organizations that follow these five principles (proven experience, real-time observability, specialist teams, verified credentials, and environmental planning) reduce the chance of failure at critical moments.
For more on managing AI risk and governance, explore AI for Executives & Strategy or review the AI Learning Path for CIOs.