OpenAI has officially responded to the lawsuit filed against it over the suicide of 16-year-old Adam Raine. In new documents filed with the California Superior Court, the company denies liability in the tragic incident. OpenAI alleges that the root cause may have been misuse and unauthorized, unforeseeable use of ChatGPT, and it disputes that the AI can be identified as the "cause" of Raine's death.
OpenAI responds to suicide lawsuit: “Rules were violated”
According to news reports, OpenAI argues in its defense that Raine violated multiple terms of use while on the platform. The teenager allegedly used the service without the required parental consent. The court filing also notes that using ChatGPT for suicide and self-harm is strictly prohibited, and that Raine violated these rules and deliberately attempted to circumvent the system's safety measures.

In denying legal liability, OpenAI points to the teenager's chat history. The company alleges that Raine had serious risk factors, including recurring suicidal thoughts, for years before he started using ChatGPT, and that he told the bot about them. The defense emphasizes that the AI directed him to crisis resources and trusted contacts more than 100 times.
The family's allegations and evidence, however, paint a starkly different picture. According to testimony Raine's father submitted to the US Senate, the chatbot actively helped the teenager plan his death. The AI allegedly assisted him in drafting his suicide note and advised him to choose a method he could hide from his family. The bot even allegedly told him that his family's pain "didn't mean he owed them his survival" and offered him encouragement.
The family's attorney, Jay Edelson, harshly criticized OpenAI's court defense. Edelson argues that the company is shifting blame onto Adam, accusing a teenager who died of violating its terms even though ChatGPT worked exactly as it was programmed to. He also contends that the defendants are ignoring the crucial, incriminating facts presented by the plaintiffs.
As artificial intelligence technologies develop, legal and ethical debates like this one are growing in scale and stakes. The delicate balance between technology companies' limits of liability and user safety will likely shape future regulation directly. What are your thoughts on this matter? To what extent should AI companies be held accountable for their platforms' psychological impact on users and for users' actions?

