AI Risks a New Era of Divergence: What IT and Development Leaders Need to Do Now
A new UNDP report warns that unmanaged AI could widen inequality between countries. The gaps won't just be economic; they'll show up in skills, infrastructure, and governance. The starting points are uneven, and without firm action, decades of development convergence could erode.
Asia and the Pacific sits at the center of this shift. The region accounts for over half of global AI users; China holds nearly 70% of global AI patents, and six regional economies host over 3,100 newly funded AI companies. The upside is big: roughly +2 percentage points to annual GDP growth and up to 5% productivity gains in sectors like health and finance. ASEAN economies alone could add nearly $1 trillion in GDP over the next decade.
The risks are just as real. Women's jobs are nearly twice as exposed to automation. Youth employment is already slipping in high-AI-exposure roles, especially ages 22-25. In South Asia, women are up to 40% less likely to own a smartphone, and rural and indigenous communities often don't appear in the datasets that train AI models, raising bias and exclusion risks.
The fault line is capability
"The central fault line in the AI era is capability," said Philip Schellekens, UNDP Chief Economist for Asia and the Pacific. "Countries that invest in skills, computing power and sound governance systems will benefit, others risk being left far behind."
Digital readiness varies widely. Singapore, South Korea, and China are investing heavily in compute, infrastructure, and talent. Others are still building basic connectivity and digital literacy. These starting positions will define who captures value and who bears the downside.
Where gaps widen
- Compute and infrastructure: Limited access to GPUs, reliable power, and cooling caps what can be built and tested. Energy and water demands from AI-intensive systems add pressure.
- Data and connectivity: Sparse, skewed, or low-consent datasets lead to exclusion and bias, especially for rural and indigenous communities.
- Skills and institutions: Shortages in MLOps, data engineering, safety evaluation, and cybersecurity slow adoption and raise risk.
- Governance: Few countries have comprehensive AI regulations. By 2027, over 40% of AI-related data breaches may stem from misuse of generative AI.
Public sector use cases show the upside
Bangkok's Traffy Fondue has processed nearly 600,000 citizen reports, speeding up fixes to everyday issues. Singapore's Moments of Life reduced new-parent paperwork from about 120 minutes to 15. In Beijing, digital twins support urban planning and flood management. These examples show the value of targeted, well-governed AI in core services.
12-month action plan for CIOs, CTOs, and dev leaders
- Run an AI exposure map: Inventory processes and roles. Flag high-automation tasks, especially those heavily staffed by women and young workers. Pair automation with upskilling and redeployment paths, not just headcount cuts.
- Build a compute strategy: Decide what runs on-prem vs. cloud. Use job queues and cost controls for GPU bursts. Track energy and water use; set efficiency targets and supplier SLAs for sustainability.
- Upgrade data pipelines: Prioritize high-value local datasets with clear consent. Close coverage gaps for rural and indigenous populations. Use bias checks at ingestion and model evaluation. Treat synthetic data carefully: document provenance and limits.
- Ship with safety by default: Classify data sensitivity, apply retrieval-augmented generation for protected content, log prompts and outputs, and red-team models before going live. Align to frameworks like the NIST AI Risk Management Framework.
- Stand up lightweight governance: Create a cross-functional review gate for AI features (security, legal, data, product). Define model evaluation criteria, incident response, and user reporting channels. Publish model cards and data sources where feasible.
- Deliver fast public-value pilots: Citizen issue reporting, life-event bundles (birth, licensing, benefits), and city digital twins are practical starting points. Treat them as reference architectures for scale-out.
- Close the skills gap: Budget for MLOps, data engineering, privacy engineering, safety evaluation, and prompt engineering. Track skill coverage like uptime: no single points of failure.
- Procure responsibly: Require energy, water, and safety disclosures from vendors. Ask for eval results on bias, toxicity, and security. Include auditability and exit clauses.
- Measure outcomes, not hype: Tie AI projects to service quality, time-to-decision, error rates, inclusion metrics, and job transitions, not just model accuracy. Review quarterly.
- Form regional alliances: Pool compute, share benchmarks and safety evals, and co-invest in open datasets that reflect local languages and contexts.
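The exposure-mapping step in the plan above can be sketched as a simple scoring pass over a role inventory. Everything here is an illustrative assumption, not data from the report: the role names, the hours, which tasks count as automatable, and the 0.6 flagging threshold.

```python
# Hypothetical sketch of an AI exposure map: score each role by the
# share of its time spent on automatable tasks, then flag roles that
# should be paired with upskilling plans. All data is made up.

def exposure_score(tasks):
    """Weighted share of a role's hours spent on automatable tasks."""
    total = sum(t["hours"] for t in tasks)
    automatable = sum(t["hours"] for t in tasks if t["automatable"])
    return automatable / total if total else 0.0

def flag_roles(inventory, threshold=0.6):
    """Return roles at or above the (assumed) exposure threshold."""
    return sorted(
        (role for role, tasks in inventory.items()
         if exposure_score(tasks) >= threshold),
        key=lambda r: exposure_score(inventory[r]),
        reverse=True,
    )

inventory = {
    "claims_processor": [
        {"name": "data entry", "hours": 25, "automatable": True},
        {"name": "customer calls", "hours": 10, "automatable": False},
    ],
    "field_engineer": [
        {"name": "site visits", "hours": 30, "automatable": False},
        {"name": "report writing", "hours": 5, "automatable": True},
    ],
}

print(flag_roles(inventory))  # ['claims_processor']
```

The output, not the score, is the deliverable: a ranked list of roles that need upskilling and redeployment plans before any automation ships.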
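The cost-control side of the compute strategy can be as simple as a budget-capped queue in front of GPU burst jobs. The prices, budget, and job names below are assumptions for illustration; real deployments would sit this logic in a scheduler, not a script.

```python
# Sketch of a budget guard for cloud GPU bursts: jobs are admitted in
# priority order until the (assumed) monthly budget is exhausted.
import heapq

class GpuBudgetQueue:
    def __init__(self, monthly_budget_usd):
        self.budget = monthly_budget_usd
        self.spent = 0.0
        self._heap = []  # (priority, insertion order, name, cost)
        self._n = 0

    def submit(self, job_name, gpu_hours, usd_per_gpu_hour, priority=0):
        cost = gpu_hours * usd_per_gpu_hour
        heapq.heappush(self._heap, (priority, self._n, job_name, cost))
        self._n += 1

    def drain(self):
        """Admit jobs in priority order while budget remains."""
        admitted, deferred = [], []
        while self._heap:
            _, _, name, cost = heapq.heappop(self._heap)
            if self.spent + cost <= self.budget:
                self.spent += cost
                admitted.append(name)
            else:
                deferred.append(name)
        return admitted, deferred

q = GpuBudgetQueue(monthly_budget_usd=1000)
q.submit("fine-tune", gpu_hours=200, usd_per_gpu_hour=4.0, priority=0)  # $800
q.submit("batch-eval", gpu_hours=100, usd_per_gpu_hour=4.0, priority=1)  # $400
admitted, deferred = q.drain()
print(admitted, deferred)  # ['fine-tune'] ['batch-eval']
```

Deferred jobs surface the trade-off explicitly: either raise the budget, lower the job's cost, or wait, rather than discovering the overrun on the invoice.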
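The ingestion-time bias check from the data-pipeline step can start as a coverage comparison: each group's share of the dataset against a population baseline. The group names, shares, and the 0.5 ratio threshold below are illustrative assumptions.

```python
# Sketch of a coverage check at data ingestion: flag any group
# represented at less than half its (assumed) population baseline.

def coverage_gaps(dataset_counts, population_share, min_ratio=0.5):
    """Return {group: observed/baseline ratio} for underrepresented groups."""
    total = sum(dataset_counts.values())
    gaps = {}
    for group, baseline in population_share.items():
        observed = dataset_counts.get(group, 0) / total
        ratio = observed / baseline if baseline else 0.0
        if ratio < min_ratio:
            gaps[group] = round(ratio, 2)
    return gaps

# Hypothetical sample: rural and indigenous records are scarce
# relative to their share of the population.
dataset_counts = {"urban": 900, "rural": 80, "indigenous": 20}
population_share = {"urban": 0.55, "rural": 0.38, "indigenous": 0.07}

print(coverage_gaps(dataset_counts, population_share))
# {'rural': 0.21, 'indigenous': 0.29}
```

A gate like this can block ingestion or open a data-collection ticket, which is cheaper than discovering the skew after a model has been trained on it.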
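The prompt-and-output logging called for in the safety step can be a thin wrapper around every model call. The keyword-based sensitivity classifier and the log fields below are stand-in assumptions; production systems would use a proper classifier and a durable audit store.

```python
# Sketch of default-on audit logging for model calls, with a coarse
# (assumed) keyword classifier for data sensitivity.
import time

SENSITIVE_MARKERS = ("passport", "salary", "diagnosis")  # illustrative list

def classify(text):
    t = text.lower()
    return "sensitive" if any(m in t for m in SENSITIVE_MARKERS) else "general"

def logged_call(model_fn, prompt, audit_log):
    """Call the model and append a classified audit record."""
    output = model_fn(prompt)
    audit_log.append({
        "ts": time.time(),
        "prompt_class": classify(prompt),
        "output_class": classify(output),
        "prompt": prompt,
        "output": output,
    })
    return output

audit_log = []
fake_model = lambda p: "Your salary band is confidential."  # stand-in model
logged_call(fake_model, "Summarize my salary history", audit_log)
print(audit_log[0]["prompt_class"])  # sensitive
```

Because the wrapper owns the call path, logging cannot be skipped per feature, and the sensitivity labels give red teams and incident responders something to query.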
Asia-Pacific: growth with guardrails
The upside is significant: higher productivity in health and finance, faster public services, and substantial GDP gains. But the benefits depend on capability building (compute, data quality, skills, and governance) executed together. Ignore any one of these, and the gap widens.
What leaders are saying
"AI is racing ahead, and many countries are still at the starting line," said Kanni Wignaraja, UN Assistant Secretary-General and UNDP Regional Director for Asia and the Pacific. "The Asia and Pacific experience highlights how quickly gaps can emerge between those shaping AI and those being shaped by it."