WHO bets on AI to build a stronger evidence base for traditional medicine
The World Health Organization plans to scale up the evidence base for traditional medicine using AI, modern data tools, and increased investment. Shyama Kuruvilla, who leads the WHO Global Centre for Traditional Medicine, outlined a broad research agenda that spans African healing practices, Ayurveda, acupuncture for migraines, and meditation. The goal is clear: move historic practices from "promising" to "proven" (or "ruled out") with rigorous, transparent data.
Kuruvilla noted that weak evidence has kept many practices outside official recognition. Larger, coordinated studies and technology-enabled evaluation can change that. As she put it, "If the evidence changes, we are obliged to be open to it."
Scope, guardrails, and what's not included
The WHO emphasized that its scope covers traditional systems with deep cultural roots. Homeopathy does not meet WHO's definition of traditional medicine, and current evidence for its effectiveness is unconvincing. That line matters for teams planning portfolios, trials, and compliance.
The strategy also stresses ethics, environmental considerations, and community engagement. The mandate: respect cultural diversity, keep communication transparent, and make safety non-negotiable.
Global summit and a 10-year strategy
The WHO Global Summit on Traditional Medicine, co-hosted with the Government of India, ran December 17-19. There, the organization presented its Global Strategy on Traditional Medicine for 2025-2034. The core message: responsibly apply AI and new tools to evaluate traditional practices at scale, with fairness and rigor. As WHO leadership highlighted, this is about pairing millennia of practice with modern science to advance health for all.
For primary sources, see WHO's work on traditional medicine and definitions: WHO: Traditional, Complementary and Integrative Medicine. Learn more about the Global Centre's mission here: WHO Global Centre for Traditional Medicine.
Why this matters for product, science, and research teams
Evidence-grade evaluation opens new product categories, clearer regulatory paths, and better clinical integration. Teams that build reliable data pipelines, metrics, and validation frameworks will shape the market. Here's what to prioritize.
Data, measurement, and AI stack
- Define outcome taxonomies across conditions (pain, sleep, mental health, inflammation) and align with clinical endpoints.
- Instrument for real-world evidence: sensors, ePROs, EHR integrations, and longitudinal cohorts. Favor pragmatic trials where appropriate.
- Use NLP to analyze multilingual literature, practitioner notes, and community knowledge. Build knowledge graphs to connect claims, contexts, and outcomes.
- Adopt FAIR principles, data lineage, and reproducible pipelines. Consider federated learning for data sovereignty and privacy.
- Create transparent model cards, clear uncertainty reporting, and bias audits (demographics, region, practice type).
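The claim-context-outcome linking described above can be sketched as a minimal evidence graph: claims connect to outcomes, and each edge carries the study context that backs it. This is an illustrative toy structure, not a WHO tool; all entities and field names are hypothetical.

```python
from collections import defaultdict

class EvidenceGraph:
    """Toy knowledge graph: (claim, outcome) edges annotated with
    evidence contexts such as study design, sample size, and region."""

    def __init__(self):
        self.edges = defaultdict(list)  # (claim, outcome) -> list of contexts

    def link(self, claim, outcome, context):
        self.edges[(claim, outcome)].append(context)

    def contexts(self, claim, outcome):
        return self.edges.get((claim, outcome), [])

g = EvidenceGraph()
g.link("acupuncture", "migraine frequency",
       {"design": "RCT", "n": 240, "region": "EU"})
g.link("acupuncture", "migraine frequency",
       {"design": "cohort", "n": 1200, "region": "CN"})

# Count independent evidence contexts behind one claim-outcome pair
print(len(g.contexts("acupuncture", "migraine frequency")))  # 2
```

In a production pipeline the contexts would come from NLP extraction over the multilingual literature, but the graph shape stays the same: claims, outcomes, and auditable evidence on the edges.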
Clinical rigor and safety
- Pre-register protocols. Run randomized, hybrid, or stepped-wedge designs where feasible; justify alternatives when not.
- Standardize adverse event capture and monitoring. Build alerting and human-in-the-loop review for safety signals.
- Distinguish between mechanisms, associations, and outcomes. Use causal inference where data allow; state limits plainly.
- Plan for replication and independent verification. Make negative results visible to reduce publication bias.
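For the adverse event monitoring above, one standard pharmacovigilance screen is the proportional reporting ratio (PRR): how often a target event is reported with the practice under study versus with everything else. A minimal sketch, with illustrative counts rather than real data:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio.
    a: target event reported with the practice under study
    b: all other events reported with that practice
    c: target event reported with all other practices
    d: all other events reported with all other practices
    """
    rate_practice = a / (a + b)
    rate_others = c / (c + d)
    return rate_practice / rate_others

# A PRR well above 1 (a common screening threshold is >= 2)
# flags a disproportionate signal for human-in-the-loop review.
signal = prr(a=12, b=488, c=40, d=9460)
print(round(signal, 2))  # 5.7
```

Disproportionality metrics like this only generate hypotheses; a flagged signal still needs clinical review before any action.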
Ethics, equity, and benefit-sharing
- Co-develop studies with practitioner communities. Align on consent, data rights, and IP up front.
- Implement benefit-sharing models for communities whose knowledge contributes to products or publications.
- Set governance for AI explainability, patient communications, and culturally sensitive guidance.
- Include environmental impact in your evaluation criteria where ingredients and supply chains are relevant.
Regulatory and market readiness
- Map claims to regulatory pathways early (wellness product vs. medical device vs. therapeutic claim). Don't overstate effects.
- Create post-market surveillance plans with continuous evidence updates and re-calibration triggers.
- Establish health economics models (QALYs, absenteeism, caregiver burden) to justify reimbursement and adoption.
- Design for clinician workflow: referral, documentation, and patient education materials.
A practical checklist to act now
- Pick 2-3 conditions with high disease burden and data availability; define measurable outcomes and time horizons.
- Secure partnerships with accredited practitioners and clinics for diverse, representative cohorts.
- Stand up a privacy-first data platform with audit trails, consent management, and federated options.
- Publish your evaluation rubric and reporting templates; invite external review to build trust.
- Commit to a living evidence program: quarterly updates, replication targets, and public change logs.
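Picking conditions and defining measurable outcomes, as the checklist suggests, starts with a feasibility question: how many participants would a comparison need? A rough per-arm estimate for comparing two response proportions (two-sided alpha 0.05, 80% power); the effect sizes are illustrative assumptions:

```python
import math

def sample_size(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate participants per arm to detect a difference
    between response proportions p1 and p2 (standard normal-
    approximation formula for two independent proportions)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. 30% response on usual care vs. a hoped-for 45% with the
# practice under evaluation
print(sample_size(0.30, 0.45))  # 160 per arm
```

If the number is infeasible for your cohorts, either pick a condition with a larger expected effect or plan a multi-site pragmatic design before committing.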
Related signals worth tracking
- OpenAI tools are helping startups ship faster across healthcare and finance; expect shorter cycles from hypothesis to pilot.
- Nurabot, an autonomous assistant for hospitals, targets a 20-30% reduction in nurse workload by 2026, an indicator of where AI is relieving operational strain.
Skills and resources
If your team is standing up AI evaluation pipelines or health data products, structured training accelerates delivery. Explore role-based learning paths here: Complete AI Training: Courses by Job.