AI in Indian healthcare: promise, limits, and the path that serves people
As India hosts high-profile AI events, a quieter but more grounded conversation is emerging in healthcare. In Delhi, a national consultation on People-led AI in Health pushed past the hype and asked a harder question: who does AI serve - patients and providers, or centralised commercial systems?
The consensus was clear. AI can help in narrow tasks, but when deployed through top-down, data-hungry platforms, it can sideline rights, worsen inequities, and add risk for patients and health workers. We need a health-systems approach that starts with rights, strengthens public care, and keeps humans firmly in the loop.
Where AI has shown value - and where it falls short
There is real promise in image recognition for radiology, triage analytics in controlled settings, and workflow assistance. Yet tools that look strong in pilots often underperform in live clinical environments. Healthcare is not just pattern-matching - it's clinical judgment, context, explanation, and care. That is human work.
Evidence also points to uneven generalisation across populations and settings. Strong guardrails and continuous, real-world evaluation are non-negotiable. For context on ethical boundaries, see the WHO guidance on the ethics and governance of AI for health.
Anchor AI in rights, not just datasets
AI in health must be rooted in a rights-based framework, not digital extractivism. Patients are not just data sources; they are owners of their data and the first beneficiaries of any derived insights. Bias risks are real when models are trained on urban, digitised populations while serving diverse communities.
- Right to understand: Patients should receive clear, relevant explanations - not opaque scores. Plain-language outputs must support informed decisions.
- Right to local processing: By default, process sensitive data locally where possible. Any cloud sharing must be explicit, limited, and revocable.
- Right to ongoing control: Consent cannot be a one-time checkbox. People must be able to withdraw access, and govern insights derived from their data.
- Right to equity and access: Audit for bias across caste, gender, region, language, and socio-economic status. Govern transparently. If built with public funds or public data, services should be free at point of use within public systems.
- Non-exclusion: Care must never hinge on the use of AI. Maintain strong, viable non-AI pathways.
These rights align with emerging data protection norms, including consent, purpose limitation, and user control under India's Digital Personal Data Protection (DPDP) Act, 2023.
Keep humans in the loop - always
AI should supplement human care, not substitute for it. Use it to support documentation, triage, and signal detection - while preserving the clinician's authority and accountability. Humans must remain in the loop for every AI-assisted function.
There is a labour risk we cannot ignore: using AI to justify staff cuts, casualisation, heavier workloads, or algorithmic surveillance of ASHAs and other frontline workers. Any approval should include a labour impact assessment, explainability for frontline use, and explicit guarantees against workforce reduction. Technology must raise capacity and dignity - not displace it.
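One way to make "human-in-the-loop by design" concrete in software is to treat every model output as a draft with no clinical effect until a clinician explicitly affirms it. The sketch below is illustrative only - the class and field names are hypothetical, not a reference implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AISuggestion:
    """A model output that stays advisory until a clinician signs off."""
    patient_id: str
    suggestion: str
    confidence: float
    affirmed_by: Optional[str] = None
    affirmed_at: Optional[datetime] = None

    def affirm(self, clinician_id: str) -> None:
        # The clinician, not the model, takes the clinical decision.
        self.affirmed_by = clinician_id
        self.affirmed_at = datetime.now(timezone.utc)

    @property
    def actionable(self) -> bool:
        # Nothing downstream may act on an unaffirmed suggestion.
        return self.affirmed_by is not None

s = AISuggestion("pt-001", "refer for chest X-ray", confidence=0.82)
assert not s.actionable   # advisory only
s.affirm("dr-sharma")
assert s.actionable       # now carries clinical authority and accountability
```

The design choice that matters is structural: downstream systems check `actionable`, so an unaffirmed suggestion simply cannot trigger a referral, prescription, or triage decision.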
Mind the political economy
AI is never neutral. Commercial platforms that centralise patient data can deepen corporatisation, expand high-cost markets, and create elite layers of care. If public data and funds build AI, the first obligation is to strengthen public provisioning - not subsidise private profit.
India's health challenges are structural: chronic underinvestment, staff shortages, weak regulation of commercial care, and high out-of-pocket costs. Algorithms won't fix governance gaps. Policy comes first; technology follows.
Where AI helps now (used judiciously)
- Primary care support: triage prompts, symptom checklists, referral routing with clear fallback to human review.
- Patient empowerment: explain test results in plain language; demystify hospital billing; rational drug-use guidance.
- System strengthening: queue management, stock monitoring, and simple workflow automations that reduce clerical load.
Practical actions for healthcare leaders
- Start with a problem statement: Define the clinical or operational gap. Set baseline metrics before deployment.
- Keep data minimal and local: Collect only what's needed. Prefer edge/on-prem processing. Encrypt data in transit and at rest.
- Make consent a lifecycle: Plain-language consent, granular choices, easy withdrawal, and auditable logs.
- Guarantee non-AI pathways: Maintain staffed alternatives for people who opt out or have limited digital access.
- Bias and safety audits: Test across caste, gender, language, region, and device types. Monitor drift post-deployment.
- Human-in-the-loop by design: Require clinician affirmation for decisions affecting diagnosis, treatment, or triage.
- Labour impact assessment: No net reduction in frontline jobs. Use AI to reduce burnout, not increase oversight pressure.
- Public interest clauses: If developed with public funds or public data, ensure free access within public health systems and prohibit exclusive privatisation.
Guardrails for procurement and deployment
- Clinical validation: Real-world trials in representative settings; compare against standard care.
- Explainability for users: Outputs must be interpretable by the intended user (clinician, nurse, ASHA, patient).
- Data governance: Purpose limitation, retention schedules, independent security audits, and breach response plans.
- Transparency: Document training data sources, known limitations, and performance across subgroups.
- Accountability: Clear liability pathways; an independent oversight body; regular public reporting.
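Reporting performance across subgroups can start very simply: compute the same metric per subgroup and flag large gaps before deployment. A minimal sketch, assuming evaluation records carry a boolean `correct` field and subgroup labels (field names here are illustrative):

```python
def subgroup_accuracy(records, group_key):
    """Per-subgroup accuracy for a bias audit.

    records: list of dicts with a boolean 'correct' field and subgroup
    fields such as 'gender', 'region', or 'language' (illustrative names).
    """
    totals, hits = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + (1 if r["correct"] else 0)
    return {g: hits[g] / totals[g] for g in totals}

records = [
    {"region": "urban", "correct": True},
    {"region": "urban", "correct": True},
    {"region": "rural", "correct": True},
    {"region": "rural", "correct": False},
]
by_region = subgroup_accuracy(records, "region")
# A gap like this (1.0 vs 0.5) should trigger review before deployment.
assert by_region == {"urban": 1.0, "rural": 0.5}
```

The same function run across caste, gender, language, and region slices - and re-run post-deployment - doubles as a simple drift monitor.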
The bottom line
AI can assist Indian healthcare - but people, rights, and public health must stay at the centre. Use AI to strengthen primary care, reduce clerical load, and inform patients, not to centralise data or thin out the workforce. Human relationships are the core of care; technology's role is to support them.