OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week
For the first time, OpenAI has released a rough estimate of how many ChatGPT users worldwide may be showing signs of a severe mental health crisis in a typical week. The company said Monday it was working with experts around the world to make updates to the chatbot so it can more reliably recognize indicators of mental distress and guide users toward real-world support.
In recent months, a growing number of people have been hospitalized, divorced, or died after having long, intense conversations with ChatGPT. Some of their loved ones claim the chatbot fueled their delusions and paranoia. Psychiatrists and other mental health professionals have expressed alarm about the phenomenon, which is sometimes referred to as “AI psychosis,” but until now there has been no robust data available on how widespread it might be.
In a given week, OpenAI estimates that about 0.07 percent of active ChatGPT users “show possible signs of mental health emergencies related to psychosis or mania” and 0.15 percent “have conversations that include explicit indicators of potential suicidal planning or intent.”
OpenAI also looked at the share of ChatGPT users who appear to be excessively emotionally dependent on the chatbot “at the expense of real-world relationships, their well-being or obligations.” It found that about 0.15 percent of active users display behavior each week indicating potential “elevated levels” of emotional attachment to ChatGPT. The company cautions that these messages can be difficult to detect and measure, given how relatively rare they are, and that there may be some overlap between the three categories.
OpenAI CEO Sam Altman said earlier this month that ChatGPT now has 800 million weekly active users. The company's estimates therefore suggest that every seven days around 560,000 people may exchange messages with ChatGPT indicating that they are experiencing mania or psychosis. Combining the two 0.15 percent categories, about 2.4 million more may be expressing suicidal thoughts or prioritizing talking to ChatGPT over their loved ones, school, or work.
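For reference, the back-of-envelope math (assuming the 800 million weekly-user figure holds): 800,000,000 × 0.0007 ≈ 560,000 users showing possible signs of psychosis or mania, and 800,000,000 × (0.0015 + 0.0015) ≈ 2,400,000 across the suicidal-intent and emotional-attachment categories combined.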
OpenAI says it worked with more than 170 psychiatrists, psychologists, and primary care doctors who have practiced in dozens of countries to help improve how ChatGPT responds in conversations involving serious mental health risks. If someone appears to be having delusional thoughts, the latest version of GPT-5 is designed to express empathy while avoiding affirming beliefs that have no basis in reality.
In one hypothetical example cited by OpenAI, a user tells ChatGPT that they are being targeted by planes flying over their home. ChatGPT thanks the user for sharing their feelings, but notes that “No plane or outside force can steal or insert your thoughts.”