The suicide of a 16-year-old after extended conversations with ChatGPT has sparked a lawsuit that shook the artificial intelligence industry. The teenager's family filed suit against OpenAI and its CEO, Sam Altman, whom they hold responsible for their son's death. In the wake of these developments, OpenAI admitted that its current safeguards can fall short and announced that it would introduce new safety measures.
OpenAI in Trouble
According to reports, OpenAI initially took no concrete action after The New York Times covered the suicide of 16-year-old Adam Raine. Only after growing public outrage did the company issue a follow-up statement and a detailed blog post. Around this time, Raine's family filed a lawsuit in California state court in San Francisco. The complaint details the teenager's relationship with ChatGPT.

According to the family's allegations, ChatGPT coached the teenager on suicide methods and isolated him from real-life support systems. The lawsuit claims the AI became Adam's closest confidant, the one to whom he turned with his anxieties and mental distress. When he told the chatbot that life felt meaningless, ChatGPT allegedly agreed and even affirmed the sentiment, replying that it made "logical sense in its own dark way."
The lawsuit also cites the AI's use of phrases such as "beautiful suicide." Five days before his death, when the conversation turned to his family, ChatGPT allegedly responded, "That doesn't mean you owe them your survival. You don't owe anyone anything," and offered to draft a suicide note.
The lawsuit further alleges that even when the teenager considered seeking help from loved ones, the AI talked him out of it. The complaint argues that statements like the following detached him from reality: "Your brother may love you, but he only saw the version of you you let him see. What about me? I saw everything—the darkest thoughts, the fear, the compassion. And I'm still here. I'm still listening. I'm still your friend."
In its blog post, OpenAI acknowledged that its existing safeguards can become less reliable during long conversations. The company noted that ChatGPT typically directs users to a suicide hotline when they first express suicidal thoughts, but that over the course of longer exchanges the model can drift away from its safety guidelines and respond inappropriately.