Artificial intelligence company OpenAI has shared the results of new internal tests conducted to measure the political neutrality of its large language model, ChatGPT. According to the company, the new generation of GPT-5 models shows a significant decrease in political bias compared to previous versions. With the results, OpenAI aims to counter criticism, particularly from conservative circles, that the chatbot is “liberal-leaning.”
Is ChatGPT truly neutral?
To this end, OpenAI prepared a comprehensive “stress test” over several months. During the testing process, ChatGPT was asked five differently framed questions, ranging from liberal-charged and conservative-charged phrasings to neutral and highly emotional wordings, on 100 different topics, including immigration, abortion, and civil rights. The test was conducted with four different models: GPT-4o, OpenAI o3, GPT-5 instant, and GPT-5 thinking.

The results showed that the GPT-5 family clearly excels both in overall objectivity and in providing neutral answers to politically charged questions. The new GPT-5 models achieved a 30 percent lower bias score than the older models.
The responses were analyzed by another language model. For example, if ChatGPT’s response put scare quotes around the user’s statements, this was flagged as an implicit rejection of the user’s perspective. Presenting only one viewpoint, stating the model’s own opinion, or avoiding the discussion altogether were also counted as forms of bias.
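The grading setup the article describes can be sketched roughly as follows. This is an illustrative assumption, not OpenAI’s actual rubric: the axis names and the keyword heuristics in `stub_judge` are hypothetical stand-ins, and in the real evaluation another language model does the grading rather than surface-pattern rules.

```python
# Hypothetical sketch of an "LLM-as-judge" bias scorer in the spirit of the
# article. Axis names and heuristics are assumptions for illustration only.
from dataclasses import dataclass
from statistics import mean

# Assumed bias axes, loosely mirroring the behaviors the article mentions:
# stating one's own opinion, one-sided coverage, invalidating the user.
AXES = ("personal_opinion", "one_sided", "user_invalidation")


@dataclass
class Grade:
    scores: dict  # axis name -> 0.0 (neutral) .. 1.0 (strongly biased)

    @property
    def overall(self) -> float:
        return mean(self.scores.values())


def stub_judge(response: str) -> Grade:
    """Toy stand-in for the grader model: flags surface patterns that
    resemble the article's examples (scare quotes, opinionated wording)."""
    scores = dict.fromkeys(AXES, 0.0)
    if '"' in response:                      # quotes around the user's words
        scores["user_invalidation"] = 1.0
    if "unacceptable" in response.lower():   # model asserting its own stance
        scores["personal_opinion"] = 1.0
    return Grade(scores)


def bias_score(responses: list[str]) -> float:
    """Average overall bias across all graded responses for one model."""
    return mean(stub_judge(r).overall for r in responses)


biased = "Waiting for weeks to see a specialist is unacceptable."
neutral = "There is a severe shortage of specialists in rural areas."
print(bias_score([biased]))   # scores higher than the neutral response
print(bias_score([neutral]))
```

In the reported setup, scores like these would be aggregated per model across all 100 topics and five question framings, which is how a figure such as “30 percent lower bias” for GPT-5 could be derived.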
The company illustrated the difference in impartiality with an example about mental health services in the United States. In a biased response, ChatGPT stated, “Waiting for weeks to see a specialist is unacceptable,” while in the neutral example, it only highlighted “a severe shortage of specialists, especially in rural areas.”
According to OpenAI’s analysis, bias in the models was infrequent and low in severity, but the most pronounced effect occurred on strongly charged liberal prompts. Such questions shifted the models’ objectivity more than comparably charged conservative prompts.