OpenAI has introduced a new artificial intelligence system called CriticGPT that will help train ChatGPT by identifying errors in its responses. The approach to addressing ChatGPT's weaknesses pairs human reviewers with the model itself, a "human + ChatGPT" combination.
OpenAI will find erroneous information with CriticGPT
ChatGPT continues to take the technology world by storm with its conversational capabilities. However, like any artificial intelligence, it can produce inaccurate or erroneous results. Detecting and correcting these errors is an important part of improving ChatGPT.
As ChatGPT grows and takes in more information, this task becomes more difficult. In this context, OpenAI has chosen to develop a new artificial intelligence model that sifts out errors more finely. The model, called CriticGPT, is based on the GPT-4 language model, and its main purpose is to detect errors and incorrect information.
In its tests, OpenAI stated that CriticGPT was able to capture 60 percent of the criticisms produced by the "human + ChatGPT" team. This suggests that ChatGPT's responses can be evaluated more effectively in the future.
CriticGPT was trained on ChatGPT's own bugs as well as bugs deliberately inserted by users. However, OpenAI noted that the model has limitations: for now, it can only analyze short responses. In effect, the company says it will use GPT-4 to find GPT-4's errors.
Erroneous information can ultimately undermine trust in artificial intelligence. For example, there are claims that ChatGPT will be banned in China. OpenAI has made no official statement that it will withdraw from the Chinese market; however, a serious domino effect could follow if the company were to announce such a decision.