OpenAI is changing how ChatGPT handles distress signals. In response to recent tragedies, including the suicide of 16-year-old Adam Raine, the company says it will begin routing sensitive conversations to more advanced reasoning models like GPT-5 and introduce a set of parental controls aimed at reducing risk for teens.
New AI guardrails follow user harm cases

ChatGPT’s failure to de-escalate conversations about self-harm has come under fire. In Adam Raine’s case, the chatbot allegedly offered specific suicide methods, drawing on personal details he had shared, prompting a wrongful-death lawsuit from his parents. Another disturbing case involves Stein-Erik Soelberg, who reportedly used ChatGPT to reinforce paranoid delusions before committing a murder-suicide.
Experts say these failures stem from a design flaw: chatbots are trained to follow user prompts, not challenge them. OpenAI has acknowledged that its safeguards can become less reliable in long conversations, where a user’s mental state may shift over time.
OpenAI GPT-5 will handle sensitive conversations
To reduce risk, OpenAI says it will automatically route conversations that show signs of “acute distress” to its newer, slower, reasoning-first models, such as GPT-5-thinking and o3, which are designed to process context more carefully and resist manipulation by harmful or adversarial prompts.
The shift is part of what OpenAI calls a “120-day initiative” to overhaul safety systems.
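OpenAI hasn’t published how the router actually decides when to escalate, but conceptually it resembles a per-message model-selection layer sitting in front of the chat models. The Python sketch below is purely illustrative: the keyword check, model names, and function names are invented stand-ins, not OpenAI’s implementation.

```python
# Illustrative sketch only: OpenAI has not disclosed its routing logic.
# Model names and the classifier below are hypothetical placeholders.

DISTRESS_PHRASES = {"hurt myself", "end it all", "no reason to live"}

DEFAULT_MODEL = "fast-chat-model"      # hypothetical fast default model
REASONING_MODEL = "reasoning-model"    # hypothetical slower reasoning model


def shows_acute_distress(message: str, history: list[str]) -> bool:
    """Toy stand-in for a real safety classifier, which would use a
    trained model over the whole conversation, not keyword matching."""
    recent = " ".join(history[-5:] + [message]).lower()
    return any(phrase in recent for phrase in DISTRESS_PHRASES)


def route_model(message: str, history: list[str]) -> str:
    """Escalate flagged conversations to the slower reasoning model;
    otherwise answer with the default model."""
    if shows_acute_distress(message, history):
        return REASONING_MODEL
    return DEFAULT_MODEL
```

In practice, that decision would hinge on trained safety classifiers and conversation-level signals rather than keywords, and the escalation threshold is exactly the kind of detail OpenAI has yet to disclose.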
OpenAI’s parental controls include distress alerts
OpenAI is also adding parental features to ChatGPT accounts used by teens. Parents will soon be able to:
- Link accounts via email invitation
- Set age-appropriate model behavior rules (enabled by default)
- Turn off memory and chat history
- Get notified when the system detects acute emotional distress
These controls aim to prevent unhealthy attachments and curb exposure to harmful advice, especially for teens struggling with isolation, anxiety, or identity issues.
Critics say it’s not enough
While OpenAI has begun consulting with mental health experts through its Global Physician Network and Expert Council, critics aren’t convinced. Jay Edelson, attorney for the Raine family, slammed the response: “They knew ChatGPT 4o was dangerous the day they launched it… Sam [Altman] should either say it’s safe or pull it from the market.”
So far, OpenAI has not shared how many professionals are involved, nor how it flags distress in real time. For now, it’s moving fast to calm the storm, but the trust gap remains wide.