Justice Prathiba M Singh: Bring AI into Healthcare, but Keep Humans in Charge

AI can widen access, but Justice Prathiba M Singh says clinicians must stay in the loop. Make it patient-first, with local validation, clear accountability, and real oversight.

Published on: Feb 22, 2026

AI in Healthcare Needs Human Oversight, Says Justice Prathiba M Singh

Artificial intelligence can help close care gaps, but it cannot replace clinicians. Justice Prathiba M Singh, a judge of the Delhi High Court, called for integrating AI into health systems with firm human oversight to expand access and protect patients.

Speaking at the AI Impact Summit in a session titled "Catalysing Global Investment for Equitable and Responsible AI in Health," she underscored the workforce crunch facing India and many regions worldwide. "AI can be implemented, it is required to be implemented because we do not have enough medical professionals either in the country or the world... we need to have a patient-centric approach," she said.

Her message was clear: deploy AI to reach remote communities across Africa, India, South America, and Southeast Asia, but keep clinicians in the loop. "The patient is at the core of this initiative, and with human oversight because without human oversight, AI in health will be a failure; it could lead to huge amounts of damage to human life."

What this means for healthcare leaders

  • Make it patient-centric: Start from the clinical problem, not the model. Define how AI supports the care pathway, escalation criteria, and what "safe failure" looks like.
  • Human-in-the-loop by design: Assign clear clinical accountability. Require clinician review for risk-bearing decisions and document overrides, rationales, and outcomes.
  • Validate before you scale: Run prospective, local validation across sub-populations. Track sensitivity, specificity, calibration, and alert burden, and re-test after updates (a minimal metrics sketch follows this list).
  • Deploy where shortages bite hardest: Triage, screening, chronic disease monitoring, telemedicine support, radiology pre-reads, and administrative relief (coding, documentation).
  • Safety and incident response: Create an AI formulary, change control, audit logs, and a "kill switch." Stand up an incident reporting process similar to pharmacovigilance.
  • Data protection by default: Minimum necessary data, consent and purpose limitation, encryption, and access controls. Favor on-device or edge processing where feasible.
  • Equity checks: Stress-test models for bias across age, sex, ethnicity, language, and socioeconomic factors. Adjust thresholds or retrain when disparities appear.
  • Procurement readiness: Require model cards, clinical evidence, post-market plans, explainability disclosures, cybersecurity posture, and business continuity terms.
  • Upskill your workforce: Train clinicians and operators on strengths, limits, and failure modes of each tool. Measure impact on outcomes, safety, and workload.
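
Teams that want to operationalize the validation and equity items above need a small set of shared numbers. Below is a minimal, illustrative Python sketch, using synthetic data and a hypothetical subgroup label, that computes sensitivity, specificity, and a simple expected calibration error per subgroup; it is one common way to run such checks, not a prescribed standard.

```python
# Minimal validation sketch with synthetic data: sensitivity, specificity,
# and a simple expected calibration error (ECE), broken out by subgroup.
import numpy as np

def sensitivity_specificity(y_true, y_prob, threshold=0.5):
    """Confusion-matrix rates at a fixed decision threshold."""
    y_pred = (y_prob >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Average gap between predicted risk and observed event rate per bin."""
    bins = np.clip((y_prob * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(y_prob[mask].mean() - y_true[mask].mean())
    return ece

# Synthetic stand-in for local validation data; "site" is a hypothetical
# subgroup label (age band, sex, language, or facility would work the same).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_prob = np.clip(0.25 + 0.5 * y_true + rng.normal(0, 0.15, 1000), 0, 1)
site = rng.choice(["site_A", "site_B"], 1000)

for g in np.unique(site):
    m = site == g
    sens, spec = sensitivity_specificity(y_true[m], y_prob[m])
    ece = expected_calibration_error(y_true[m], y_prob[m])
    print(f"{g}: sensitivity={sens:.2f}, specificity={spec:.2f}, ECE={ece:.3f}")
```

Large gaps between subgroups on any of these numbers are the signal to adjust thresholds or retrain before scaling.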

Global guidance in progress

Justice Singh shared that the World Health Organization is developing a global guidance document on legal considerations for AI in healthcare, an effort she has co-chaired for the past year and a half. The framework is organized in two parts (general AI regulation and health-specific guidance) and maps solutions across legal standards, regulatory oversight, and institutional capacity building.

For reference materials on AI and health, see the World Health Organization's overview of AI-in-health topics and guidance.

India Health Stack: scale innovation with guardrails

Pointing to the success of India Stack (Aadhaar, UPI, DigiLocker, and e-KYC), Justice Singh floated the idea of an "India Health Stack" to enable innovation under a single regulator with controlled access to data. The goal: accelerate safe, interoperable solutions while maintaining oversight.

  • Core layers: Consent and identity, secure data exchange (e.g., FHIR-based APIs; a minimal read example follows this list), audit and logging, and model registries.
  • Regulatory sandboxes: Time-bound pilots with predefined metrics, safety gates, and transparent post-pilot reporting.
  • Operational rails: Standardized evaluation protocols, incident reporting channels, and certification pathways for clinical AI.
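
To make the data-exchange layer concrete, here is a minimal sketch of a standard FHIR REST read. The endpoint shape (GET {base}/Patient/{id} with an application/fhir+json Accept header) comes from the FHIR specification; the base URL, token, and patient identifier below are hypothetical placeholders.

```python
# Minimal FHIR read sketch: fetch one Patient resource over the standard
# REST interface. The base URL and token are hypothetical placeholders; in
# a real deployment the token would be issued only after the stack's
# consent and identity checks.
import requests

FHIR_BASE = "https://fhir.example-sandbox.in"   # hypothetical sandbox
TOKEN = "REPLACE_WITH_ACCESS_TOKEN"

def read_patient(patient_id: str) -> dict:
    """GET {base}/Patient/{id} and return the resource as JSON."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/fhir+json",
        },
        timeout=10,
    )
    resp.raise_for_status()  # surface auth/permission failures loudly
    return resp.json()

if __name__ == "__main__":
    patient = read_patient("12345")  # hypothetical identifier
    print(patient.get("resourceType"), patient.get("id"))
```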

How to start now

  • Pick one high-impact use case with clear outcome metrics. Run a small pilot with human oversight and a strict rollback plan.
  • Stand up an AI governance committee (clinical, legal, data, risk). Approve tools, monitor drift (a drift-check sketch follows this list), and publish a quarterly safety report.
  • Close the skills gap: short training for clinicians and operators; simulation-based drills for rare but high-risk scenarios.
  • Measure what matters: clinical outcomes, disparities, patient experience, clinician workload, and total cost of care.
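
For the drift monitoring mentioned above, a governance committee needs a number it can track between reviews. One common choice, an assumption here rather than anything prescribed in the talk, is the Population Stability Index (PSI), which compares the distribution of an input feature at validation time against live data. The sketch below uses synthetic data and illustrative thresholds.

```python
# Drift-check sketch: Population Stability Index (PSI) between a baseline
# (validation-time) sample and live production data for one input feature.
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, n_bins: int = 10) -> float:
    """PSI = sum((live% - base%) * ln(live% / base%)) over baseline-quantile bins."""
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range live values
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    eps = 1e-6                              # avoid log(0) in empty bins
    base_pct, live_pct = base_pct + eps, live_pct + eps
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

# Synthetic example: an input such as patient age, shifted in production.
rng = np.random.default_rng(1)
baseline = rng.normal(50, 10, 5000)
live = rng.normal(55, 12, 5000)

# Illustrative rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate.
print(f"PSI = {psi(baseline, live):.3f}")
```

Run a check like this per feature (and per model output) on a schedule, and route breaches into the same incident process used for other safety events.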


