ChatGPT, which has millions of users worldwide, handles an enormous volume of queries every day. However, OpenAI announced that some groups were using its models for deception, and that it has terminated their operations.
OpenAI has shut down operations using ChatGPT for deception
Analyzing data from the last two years, OpenAI found that ChatGPT was being misused by several groups for deceptive purposes. The company took down five covert influence operations, highlighting networks originating from Russia, China, Iran and Israel.
It said these operations used AI models such as GPT-3.5 and GPT-4 to generate misleading content. While the true identities and intentions of the organizations have not been fully revealed, they are said to have aimed to manipulate public opinion and political outcomes.
According to the company, a Russian operation called “Doppelganger” created fake news headlines, turned articles into social media posts, and produced multilingual comments intended to undermine support for Ukraine. Another Russian group targeted Ukraine, Moldova, the US and the Baltic states.
The Chinese network “Spamouflage”, known for its misleading activity on Facebook and Instagram, had been expanding its operations using ChatGPT. It used the AI models to conduct research and to create multilingual content for social platforms.
OpenAI stated that its investigation team is working to cut off these organizations' access and funding. To that end, it underlined that it is cooperating with technology companies, non-governmental organizations and governments.