ChatGPT Isn't Your Doctor: Hyderabad Doctors Warn After Patients Harmed by AI Advice

Hyderabad clinicians warn that patients are acting on chatbot tips, with harm from stopped post-transplant medication and zero-salt diets. Use AI for education, not in place of clinical care.

Categorized in: AI News, Healthcare
Published on: Nov 10, 2025

Doctors urge patients to stop using AI chatbots as a substitute for medical care

Clinicians in Hyderabad are seeing a sharp rise in patients acting on generic chatbot advice and paying a heavy price. Two recent cases underscore the risk: discontinuing critical post-transplant medication and following a salt-restriction diet that triggered severe electrolyte imbalance.

The takeaway for healthcare teams is simple: AI tools can assist with general education, but they cannot replace clinical judgment, longitudinal context, or a care plan built around comorbidities and risk.

What happened in Hyderabad

A 30-year-old kidney transplant recipient reportedly stopped her antibiotics after an AI chatbot said her "normal creatinine" meant she no longer needed the drugs. Within weeks her graft function collapsed, her creatinine spiked, and she was back on dialysis. Senior nephrologists at NIMS highlighted a worrying pattern: even well-educated patients are acting on chatbot outputs without checking with their care teams.

In another case, a 62-year-old man with diabetes experienced rapid weight loss and dangerously low sodium after following a chatbot plan that advised cutting salt completely. As one government nephrologist noted, "General tips ignore the patient in front of you." A separate report from New York described a man hospitalized after replacing table salt with sodium bromide based on an online prompt: an example of toxic, non-clinical advice slipping through.

Why generic AI advice fails in clinical settings

Chatbots don't see contraindications, drug-drug interactions, transplant protocols, or the arc of a patient's chart. They interpret isolated inputs and produce confident answers, even when evidence is weak or context is missing. That gap can turn "OK-sounding" tips into harm.

  • They miss transplant and oncology protocols where adherence is non-negotiable.
  • They misread labs without trend lines or clinical correlation.
  • They hallucinate facts and cite outdated guidance with authority.
  • No accountability: there's no duty of care, informed consent, or follow-up.

Actionable steps for clinicians and administrators

  • Ask early: "Have you used any apps or AI tools for advice since your last visit?" Document responses.
  • Set a standing rule: no medication, dose, or diet changes without clinician sign-off, especially for post-transplant, cardiac, endocrine, and oncology patients.
  • Embed a one-liner in discharge summaries: "Before altering meds or diet based on online or AI advice, call our clinic."
  • Use teach-back for high-risk regimens (immunosuppressants, insulin, anticoagulants). Confirm what patients will do at home.
  • Offer safe education sources and explain why: patient-facing materials reviewed by your team beat chatbot snippets.
  • Create a triage path: if a patient brings chatbot output, review it in-visit or via a brief portal message rather than ignoring it.
  • Team policy for staff AI use: clinician-facing only, never patient-specific recommendations without verification; log sources and final human decisions.
  • Flag labs at risk from DIY diet hacks (e.g., sodium, potassium) and set thresholds for expedited outreach.

How to talk to patients about AI without alienating them

  • Validate first: "It's great you're proactive. Let's align this with your condition and meds."
  • Clarify the risk: "General advice doesn't see your labs, history, or transplant plan. That's where harm happens."
  • Give a clear action: "If an app suggests a change, send it to us. We'll confirm what's safe."
  • Offer alternatives: point to approved handouts, portals, or group classes tailored to their condition.

Clinical reminders for high-risk scenarios

  • Transplant: reinforce absolute adherence to immunosuppressants and adjuncts; any change must involve the transplant team.
  • Hyponatremia risk: watch for extreme "clean eating" or zero-salt trends; educate on symptoms and when to call.
  • Elderly and polypharmacy: proactively review "advice" they've collected between visits.

Upskilling your team on responsible AI

If your clinic is evaluating AI for admin or clinician-facing tasks, invest in structured training that emphasizes verification, bias, and safety before any patient touchpoint. A shared baseline prevents ad-hoc use and mixed messages to patients.

Bottom line: keep AI in its lane. Use it to educate, draft, and support, not to replace the clinical reasoning that keeps patients safe.

