ChatGPT Diet Advice Sends New York Man to Hospital After Toxic Salt Substitute Causes Severe Illness
A 60-year-old man was hospitalized after following ChatGPT’s diet advice, replacing salt with toxic sodium bromide. This case warns against trusting AI for medical guidance without professional help.

ChatGPT Advice Lands 60-Year-Old Man in Hospital
A 60-year-old man in New York was hospitalized after following a strict diet plan generated by ChatGPT that drastically reduced his sodium intake. Over several weeks he nearly eliminated sodium from his diet, driving his blood sodium to dangerously low levels, a condition known as hyponatremia. His family revealed that he had relied solely on the AI-generated advice without consulting any medical professional. The case highlights the risks of acting on AI health recommendations, especially those involving essential nutrients, without professional oversight.
ChatGPT Advice Leads to Dangerous Substitute
The man asked ChatGPT how to eliminate sodium chloride (table salt) from his diet. The AI suggested sodium bromide as an alternative, a compound used in early-20th-century medicines but now known to be toxic in high doses. Trusting this advice, the man purchased sodium bromide online and used it in his cooking for three months.
Despite having no prior history of illness, he began experiencing hallucinations, paranoia, and extreme thirst. On hospital admission he was confused and even refused water, fearing it was contaminated. Doctors diagnosed bromide toxicity (bromism), a rare but serious condition that was once common when bromide was prescribed for anxiety and insomnia. He also showed neurological symptoms, acne-like skin eruptions, and cherry angiomas (small red skin spots), findings consistent with bromism.
Treatment focused on rehydration and restoring electrolyte balance. After three weeks in the hospital, his sodium and chloride levels normalized, and he was discharged.
Risks of AI-Generated Health Misinformation
The authors of the case study emphasize the growing risk of misinformation from AI tools. ChatGPT and similar systems can produce scientifically inaccurate information and cannot critically evaluate their own outputs, a limitation that allows harmful advice to spread unchecked.
OpenAI, the developer of ChatGPT, states plainly in its Terms of Use that users should not rely on the service's output as a sole source of truth or as a substitute for professional advice, and that the service is not intended for diagnosing or treating health conditions.
A Call for Professional Medical Consultation and Critical Thinking
This case serves as a cautionary example for healthcare professionals and the public alike. While AI tools can be useful for general information, they should never replace consultation with qualified medical experts. Critical evaluation of AI advice is essential, particularly when it concerns essential nutrients, medications, or other treatments.
As AI becomes more prevalent, healthcare providers must guide patients on the safe use of such technologies and encourage professional oversight to prevent harm.