A leading US medical journal has issued a warning about using ChatGPT for health-related information after a man developed a rare condition following advice from the chatbot.
The Annals of Internal Medicine published a case study detailing how a 60-year-old man developed bromism, or bromide toxicity, after consulting ChatGPT about removing table salt from his diet.
Bromism was a well-recognised condition in the early 20th century, thought to have contributed to up to one in 10 psychiatric admissions at the time. In this case, the patient told doctors that after reading about the harmful effects of sodium chloride, or table salt, he sought guidance from ChatGPT about eliminating chloride from his diet. Acting on the chatbot's suggestion, he took sodium bromide for three months.
While ChatGPT had reportedly told the man that “chloride can be swapped with bromide, though likely for other purposes, such as cleaning,” it did not issue any health warning or ask why he was seeking such information. Sodium bromide was used as a sedative in the early 20th century.
The authors of the study, from the University of Washington in Seattle, emphasised that the case raised concerns about how artificial intelligence could contribute to preventable health risks. They also noted that, because they did not have access to the patient’s ChatGPT conversation logs, it was impossible to determine exactly what advice he had been given.
When the authors themselves asked ChatGPT what chloride could be substituted with, the response also included bromide, with no safety warning and no question about why they were asking, an omission the researchers felt a medical professional would not have made.
Authors warn about AI models
In the article, the authors warned that AI models like ChatGPT could “generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation.”
Last week, OpenAI, the company behind ChatGPT, announced an update to the chatbot powered by its GPT-5 model, claiming the upgrade would improve health-related responses and allow the system to “flag potential concerns” such as physical or mental illness. However, the company reiterated that ChatGPT is not a replacement for professional medical advice.
The Annals article was published before the launch of GPT-5 and referred to an earlier version of the chatbot, which the patient is believed to have used.
While the authors acknowledged that AI has the potential to bridge gaps between scientific knowledge and the public, they highlighted the risks of “decontextualised information” and noted that a medical professional would likely never suggest sodium bromide as a replacement for table salt.
They also advised that doctors should ask patients where they are obtaining their health information.