India Health Summit 2025 Urges Clinician-Led, Ethical AI: Enabler, Not Replacement
At Times Network India Health Summit, leaders said AI should aid, not replace, clinicians. Validate tools, secure data, integrate into care, and build prevention habits early.

At the Times Network India Health Summit 2025, clinicians and healthcare leaders aligned on a simple rule: use AI to sharpen judgment, not replace it. The panel on AI and Data-Driven Care brought together Arvind Pachhapur (Strand Life Sciences), Dr Abel George (Apollo Adlux Hospital), and Deepak Sahni (Healthians) to lay down what responsible adoption looks like inside Indian hospitals.
The clinician stays accountable
Dr George was direct: the hospital must meet regulations and approvals, but the clinician remains the primary decision-maker. "He should not be using AI as a replacement, but rather should be using an enabler." AI can augment diagnostics, triage, and treatment planning, but clinical context and judgment decide the next step.
Validate models, then trust them carefully
Pachhapur stressed that every AI tool demands validation and confirmation before deployment. Ethical use in India needs a framework that respects global standards and fits India's realities: workforce constraints, diverse populations, and variable infrastructure. That means local data, prospective evaluation, and ongoing performance checks in real clinical settings.
Data privacy and security are non-negotiable
India's health data is still fragmented: paper files, duplicate identifiers, and inconsistent formats raise confidentiality risks. The Digital Personal Data Protection Act (2023) sets principles, but execution must catch up. Strong anonymization, encryption in transit and at rest, and strict access controls are table stakes as AI models learn from large datasets.
Reference materials: Digital Personal Data Protection Act, 2023 (Official Gazette) | U.S. FDA resources on AI/ML in SaMD
Build health as a habit, starting young
Prevention came through as a clear opportunity. Deepak Sahni argued for using data and school programs to normalize daily exercise and basic health literacy from early childhood. "If you tell someone in nursery that 20-30 minutes of daily exercise is important from that age, you will see these diseases don't happen."
What healthcare leaders should do next
- Define clinical ownership: name the responsible clinician for any AI-assisted decision.
- Set pre-deployment gates: external validation, bias checks, and prospective pilots on local cohorts.
- Establish a data policy: consent flows, de-identification, encryption, retention, and audit trails.
- Integrate into workflows: surface AI outputs inside existing EHR/diagnostic systems with clear UX and confidence scores.
- Create override and fallback paths: clinicians can ignore or counter AI recommendations without friction.
- Monitor in production: track accuracy, drift, false positives/negatives, and equity across subgroups.
- Train teams: short modules for clinicians, nurses, tech staff, and admins on capabilities and limits.
- Be transparent with patients: explain how AI assists care and how their data is protected.
- Vendor due diligence: review training data sources, regulatory status, security posture, and update cycles.
- Governance: form an AI safety committee to review incidents and approve major model changes.
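The production-monitoring step above can be sketched in code. This is a minimal illustration, not a production pipeline: the record shape `(subgroup, predicted, actual)` and the function names are assumptions for the sketch, and a real deployment would sit on proper MLOps and audit tooling.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Accuracy per subgroup from (subgroup, predicted, actual) records.

    Hypothetical log format: each record is one AI-assisted decision,
    tagged with the patient subgroup it belongs to.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def equity_gap(accuracies):
    """Largest accuracy gap across subgroups: a simple equity alarm signal."""
    values = list(accuracies.values())
    return max(values) - min(values)

# Example: if accuracy diverges across subgroups, the gap flags it for review.
logs = [("A", 1, 1), ("A", 0, 1), ("B", 1, 1), ("B", 0, 0)]
per_group = subgroup_accuracy(logs)   # {"A": 0.5, "B": 1.0}
gap = equity_gap(per_group)           # 0.5
```

In practice a hospital team would run a check like this on a schedule, alert when the gap or overall accuracy crosses a threshold, and route flagged cases to the AI safety committee named in the governance step.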
Bottom line for clinicians
Treat AI like a skilled assistant: useful, fast, and fallible. Keep clinical judgment at the center, demand validation, and protect patient data at every step. The result is safer care, less friction for staff, and earlier prevention for communities.