OpenAI has rolled out significant updates to ChatGPT aimed at making interactions safer and more responsible. The company announced fundamental changes designed to produce more sensitive responses, particularly when interacting with individuals experiencing mental or emotional distress.
ChatGPT will no longer act as your personal psychologist
These interventions follow several user cases that have come to light in recent months, in which ChatGPT allegedly encouraged users’ delusional thinking or inadvertently worsened their psychological distress.
The company said it is working with psychology and psychiatry experts to mitigate these risks and has introduced new safeguards based on their recommendations.
Accordingly, ChatGPT will now avoid directly answering questions considered high-risk. For critical personal decisions, such as “Should I get a divorce?”, the system will help the user weigh different options rather than give a definitive answer.
This change is meant to keep the AI a neutral source of information on matters that fall within the realm of personal responsibility and could have serious consequences.
Another update is a break reminder for long sessions. When a user chats uninterrupted for a certain period, a notification appears on screen: “You’ve been chatting for a while — is this a good time to take a break?”
After this alert, the user can choose to either continue the conversation or end it. This feature has previously been implemented on some social media and gaming platforms. OpenAI states that it will optimize the timing and frequency of the alert over time.
So, what do you think of these changes? Share your thoughts with us in the comments section below.