AI for Health, Not Healthcare for AI: India's Shift from Hype to Homework on Equity and Governance

India's health leaders are turning down the hype and asking harder questions about AI: Who benefits? Who's left out? Who's accountable? Equity, validation, and scale come first.

Published on: Jan 11, 2026

AI in Healthcare: Governance, Equity, and Responsible Innovation in India

AI promises faster diagnoses and broader access. But the bigger question is harder: who benefits, who gets missed, and how do we govern systems we don't fully understand yet?

That question took center stage at the inaugural Winter Dialogue on RAISE (Responsible AI for Synergistic Excellence in Healthcare) at Ashoka University, hosted by the Koita Centre for Digital Health alongside NIMS Jaipur, ICMR-NIRDHS, and the Gates Foundation, with WHO SEARO as technical host. An official Pre-Summit event of the AI Impact Summit 2026 and the first of four national RAISE dialogues this month, the gathering had a clear focus: health AI policy and governance.

From pilots to public systems

Dr Karthik Adapa (WHO) called out "pilotitis" - projects that never leave the sandbox. Frameworks like SALIENT matter because they force teams to look beyond model performance to integration, evaluation, procurement, and long-term support inside public programs.

Bottom line for health leaders: don't fund models without a plan for scale, monitoring, and ownership.

Accuracy vs equity: what are we willing to trade?

Dr Anurag Agrawal posed a blunt question: would you pick higher average accuracy if it fails women, or settle for lower accuracy if outcomes are fair? His message became a refrain: "AI for Health, not Healthcare for AI."

Equity is not a footnote. If subgroup performance isn't measured, inequity is guaranteed.
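
To make that concrete, here is a minimal sketch (our illustration, not something presented at the dialogue) of reporting accuracy and sensitivity per subgroup rather than a single overall number. The table, column names, and values are assumptions for the example:

```python
import pandas as pd

# Illustrative evaluation table: true label, model prediction, and a
# demographic column. Names and values are assumed for this sketch.
results = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 1],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "sex":    ["F", "F", "F", "F", "M", "M", "M", "M"],
})

def subgroup_report(df, group_col):
    """Print accuracy and sensitivity per subgroup, not just overall."""
    for group, g in df.groupby(group_col):
        acc = (g["y_true"] == g["y_pred"]).mean()
        positives = g[g["y_true"] == 1]
        sens = (positives["y_pred"] == 1).mean() if len(positives) else float("nan")
        print(f"{group_col}={group}: n={len(g)}, accuracy={acc:.2f}, sensitivity={sens:.2f}")

print(f"overall accuracy={(results['y_true'] == results['y_pred']).mean():.2f}")
subgroup_report(results, "sex")
```

A model can look acceptable on the overall line while the per-group lines tell a different story; that gap is exactly what explicit equity targets are meant to catch.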

Where the cracks show

Case studies across tuberculosis screening, cancer detection, and maternal health showed promise - and fragility. Data pipelines are brittle, infrastructure is uneven, regulation is unclear, and social bias seeps into labels and outcomes.

Mental health drew the strongest caution. As Dr Prabha Chand noted, large language models are "optimized for engagement, not clinical outcomes." Dr Smruti Joshi added, "mental health judgment cannot be fully automated." The role of AI here must be narrow, auditable, and always supervised - especially for vulnerable groups.

Validation and accountability by default

"Imperfect data produces imperfect models," said Dr Mary-Anne Hartley. In a country as diverse as India, external validity can't be assumed. Continuous monitoring, bias mitigation, and human-in-the-loop checks need to be standard practice, not optional extras.

For high-level guidance, see WHO's recommendations on ethics and governance of AI for health: WHO guidance. Regional context on digital health is also available via WHO SEARO.

What healthcare leaders can do now

  • Set explicit equity targets. Track performance by sex, age, geography, language, and socioeconomic status. Don't accept "overall accuracy" as success.
  • Prefer models you can explain and audit. A simpler system that treats patients fairly beats an opaque one that doesn't.
  • Fix the data layer. Improve consent, documentation, and data quality. Ensure datasets represent the people you serve.
  • Kill "pilotitis." Budget for scale-up: infrastructure, integration, training, change management, and maintenance before you start.
  • Build human-in-the-loop workflows. Define who reviews, who overrides, and how escalation works in real time.
  • Validate across sites and populations. Re-test after updates. Monitor for drift and bias over time, not just at launch (see the sketch after this list).
  • Treat mental health with extra caution. Limit AI to screening, triage, and documentation support with clear guardrails.
  • Assign accountability. Name clinical owners, data stewards, and safety officers. Write it into SOPs and contracts.
  • Align with public health programs. Co-design with state and national bodies to meet real service delivery needs.
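
On the drift point above, one lightweight production check is to compare the current score distribution against the validation-time baseline. Here is a minimal sketch using the Population Stability Index; the 0.2 threshold is a common rule of thumb rather than a regulatory standard, and the data is simulated purely for illustration:

```python
import numpy as np

def psi(baseline, current, bins=10):
    """Population Stability Index between two score distributions.
    Rule of thumb (an assumption, not a standard): PSI > 0.2 suggests
    drift worth investigating."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range scores
    b = np.histogram(baseline, edges)[0] / len(baseline)
    c = np.histogram(current, edges)[0] / len(current)
    b = np.clip(b, 1e-6, None)              # avoid log(0) on empty bins
    c = np.clip(c, 1e-6, None)
    return float(np.sum((c - b) * np.log(c / b)))

# Simulated scores: validation baseline vs. this month's production scores
# after a shift in the patient population.
rng = np.random.default_rng(0)
baseline = rng.beta(2.0, 5.0, 5000)
current = rng.beta(2.6, 4.2, 2000)
score = psi(baseline, current)
print(f"PSI={score:.3f}", "-> investigate" if score > 0.2 else "-> stable")
```

The same comparison can be run per subgroup, which ties drift monitoring back to the equity targets earlier in the list.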

Building the muscle for responsible AI

As Vice-Chancellor Somak Raychaudhury put it, responsible AI in health can't be built in silos. Universities have to advance both the research and the institutional infrastructure that enable public-good outcomes at scale.

RAISE, described by Aradhita Baral as "a platform for sustained dialogue," now expands to IIT Delhi, Bengaluru, and Hyderabad. The shift is encouraging: less hype, more homework. Policy, equity, and operational discipline are the real work.

If your team is upskilling for evaluation, bias testing, and safe deployment in clinical settings, explore curated role-based learning here: Complete AI Training - Courses by Job.

