ChatGPT sparked curiosity after it abruptly ended conversations when it encountered certain names. Users reported that the AI assistant simply stopped responding the moment these names were entered. But why does ChatGPT apply such censorship?
ChatGPT creates controversy with name censorship
The popular chatbot stops responding when certain names are entered. Early tests surfaced names as varied as David Mayer, David Faber, Brian Hood, Guido Scorza, Jonathan Turley and Jonathan Zittrain.
ChatGPT reportedly refused to respond to prompts containing these names, returning a message saying it was unable to produce a response.
OpenAI, for its part, remained silent about what the problem with these names was, fueling speculation. Theories range from privacy protections and biases in the training data to flaws in the chatbot’s filtering system.
When asked about its behavior, ChatGPT cited privacy concerns, data biases and incomplete data sets as possible reasons. Some users later reported that it could process the name David Mayer, though it still behaved inconsistently.
Users speculate that the blocked names belong to people who have tried to remove information about themselves from online platforms, drawing attention through GDPR requests or legal action. The AI assistant reportedly offered no answer to these concerns.
Of course, ChatGPT is not alone: other assistants, including Google Gemini, are known to avoid responding to certain well-known names. Gemini even temporarily disabled its image generation feature for prompts involving certain names.