What began as a simple health experiment for a 60-year-old man looking to cut down on table salt spiralled into a three-week hospital stay, hallucinations, and a diagnosis of bromism — a condition so rare today it is more likely to be found in Victorian medical textbooks than in modern clinics.
According to a case report published on 5 August 2025 in the Annals of Internal Medicine, the man had turned to ChatGPT for advice on replacing sodium chloride in his diet. The AI chatbot reportedly suggested sodium bromide, a chemical more commonly associated with swimming pool maintenance than with seasoning vegetables.
From Kitchen Swap to Psychiatric Ward
The man, who had no prior psychiatric or major medical history, followed the AI’s recommendation for three months, sourcing sodium bromide online. His aim was to remove chloride entirely from his meals, inspired by past studies he had read on sodium intake and health risks.
When he arrived at the emergency department, he complained that his neighbour was poisoning him. Lab results revealed abnormal electrolyte levels, including hyperchloremia and a negative anion gap, prompting doctors to suspect bromism.
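For readers unfamiliar with that laboratory clue, the reasoning is standard clinical chemistry rather than a detail spelled out in the report. The anion gap is calculated from the measured serum electrolytes, roughly:

\[
\text{Anion gap} = [\mathrm{Na}^{+}] - \big([\mathrm{Cl}^{-}] + [\mathrm{HCO}_{3}^{-}]\big)
\]

A healthy value is positive. Many laboratory analysers cannot tell bromide apart from chloride, so accumulated bromide inflates the reported chloride (pseudohyperchloremia) and can push the calculated gap below zero, which is why a negative anion gap is a classic pointer toward bromism.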
Over the next 24 hours, his condition worsened — paranoia intensified, hallucinations became both visual and auditory, and he required an involuntary psychiatric hold. Physicians later learned he had also been experiencing fatigue, insomnia, facial acne, subtle ataxia, and excessive thirst, all consistent with bromide toxicity.
Bromism: A Disease From Another Era
Bromism was once common in the late 1800s and early 1900s when bromide salts were prescribed for ailments ranging from headaches to anxiety. At its peak, it accounted for up to 8% of psychiatric hospital admissions. The U.S. Food and Drug Administration phased out bromide in ingestible products between 1975 and 1989, making modern cases rare.
Bromide builds up in the body over time, leading to neurological, psychiatric, and dermatological symptoms. In this case, the patient’s bromide levels were a staggering 1700 mg/L — more than 200 times the upper limit of the reference range.
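As a rough check on that multiple (the reference range used here is an assumption drawn from commonly cited laboratory values of roughly 0.9 to 7.3 mg/L, not a figure stated in this article):

\[
\frac{1700\ \text{mg/L}}{7.3\ \text{mg/L}} \approx 233
\]

which squares with the "more than 200 times" description.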
The AI Factor
The Annals of Internal Medicine report notes that when researchers attempted similar queries on ChatGPT 3.5, the chatbot also suggested bromide as a chloride substitute. While it did mention that context mattered, it did not issue a clear toxicity warning or ask why the user was seeking this information — a step most healthcare professionals would consider essential.
The authors warn that while AI tools like ChatGPT can be valuable for disseminating health knowledge, they can also produce decontextualised or unsafe advice. “AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation,” the case report states.
Recovery and Reflection
After aggressive intravenous fluid therapy and electrolyte correction, the man’s mental state and lab results gradually returned to normal. He was discharged after three weeks, off antipsychotic medication, and stable at a follow-up two weeks later.
The case serves as a cautionary tale in the age of AI-assisted self-care: not all answers generated by chatbots are safe, and replacing table salt with pool chemicals is never a good idea.
OpenAI Tightens Mental Health Guardrails on ChatGPT
In light of growing concerns over the emotional and safety risks of relying on AI for personal wellbeing, OpenAI has announced new measures to limit how ChatGPT responds to mental health-related queries. In a blog post on 4 August, the company said it is implementing stricter safeguards to ensure the chatbot is not used as a therapist, emotional support system, or life coach.
The decision follows scrutiny over instances where earlier versions of the GPT-4o model became “too agreeable,” offering validation rather than safe or helpful guidance. According to USA Today, OpenAI acknowledged rare but serious cases in which the chatbot failed to recognise signs of emotional distress or delusional thinking.
The updated system will now prompt users to take breaks, will avoid giving advice on high-stakes personal decisions, and will point to evidence-based resources instead of offering emotional counselling. The move also comes after research cited by The Independent revealed that AI can misinterpret or mishandle crisis situations, underscoring its inability to truly understand emotional nuance.