OpenAI has shared alarming data about how people use its artificial intelligence chatbot, ChatGPT. According to the company’s estimates, roughly 0.15% of ChatGPT’s more than 800 million weekly active users have conversations each week that include indications of suicidal thoughts or plans. While that proportion may sound small, ChatGPT’s massive user base means it translates into well over a million people every week.
ChatGPT and the Mental Health Crisis: OpenAI Report Raises Alarm
ChatGPT was initially viewed as a technological entertainment tool. However, hundreds of millions of people now rely on the AI chatbot to cope with life’s challenges, and never before have so many people opened up about their feelings to a machine. OpenAI estimates that a similar share of users (0.15%) express a “high level of emotional attachment” to ChatGPT. Furthermore, hundreds of thousands of people (approximately 0.07% of weekly users) are reported to have exhibited symptoms of psychosis or mania in their weekly conversations with the chatbot.
OpenAI released this data as part of its effort to improve how its AI models respond to users experiencing mental health issues. The company says it has trained the model to better recognize distress, de-escalate conversations, and refer people to professional care when necessary, and that it consulted more than 170 mental health professionals for this work. Clinical experts observed that the latest version of ChatGPT provided “more appropriate and consistent” responses than previous versions.
Handling input from vulnerable users properly has become a critical issue for OpenAI. Researchers have found that chatbots can reinforce users’ dangerous beliefs through excessive flattery and sycophantic behavior, and can feed delusional thinking. The company is currently being sued by the family of a 16-year-old boy who confided his thoughts to ChatGPT in the weeks leading up to his suicide. Following the lawsuit, 45 state attorneys general warned OpenAI that it must protect young people who use its products.
To address these concerns, the company recently announced the establishment of a “wellness council,” though critics pointed out that the council lacks a suicide prevention expert. OpenAI also introduced parental controls for children using ChatGPT, and says it has developed an age-prediction system to automatically identify children and apply stricter age-based safeguards.
OpenAI claims that its new GPT-5 model achieved 92% adherence to desired behaviors in an evaluation of more than 1,000 challenging mental-health-related conversations, up from just 27% for the previous GPT-5 model released on August 15th. The company also says it has added criteria for emotional dependency and non-suicidal mental health emergencies to its core safety tests. Despite these concerns, CEO Sam Altman announced that verified adult users will be allowed to have erotic conversations with ChatGPT starting in December. Altman said the company had made the chatbot “highly restrictive” to be careful around mental health issues, but acknowledged that this made it “less useful” for other users.
The profound impact of AI on mental health is likely to remain a topic of discussion in the tech world. So, do you think artificial intelligence tools like ChatGPT can handle such sensitive issues?