OpenAI has released a new tool that aims to detect AI-generated text, including the output of its own ChatGPT and GPT-3 models. The tool, called the OpenAI AI Text Classifier, uses a machine learning model trained on output from 34 text-generating systems alongside human-written text from Wikipedia to judge whether a piece of text was written by an AI or a human. Here are the details…
OpenAI addresses concerns about AI-generated text
The classifier correctly identifies only about 26% of AI-written text as "likely AI-generated," so OpenAI recommends using it alongside other methods of determining a text's origin rather than relying on it alone to prevent AI text generators from being abused. OpenAI says the tool is still in its early stages and that it hopes to improve the classifier's accuracy over time.
As the use of AI-generated text continues to grow, there have been calls for the creators of these tools to take steps to mitigate their potential harms. Some school districts have banned ChatGPT and other AI text generators on their networks, while sites such as Stack Overflow have banned the sharing of content produced by them.
The OpenAI AI Text Classifier is an intriguing tool: it builds on the same language model technology as ChatGPT but has been fine-tuned to judge whether a piece of text was written by an AI. It has clear limitations, however. It requires at least 1,000 characters of input, it does not detect plagiarism, and it is considerably more likely to produce incorrect results for text written in languages other than English or for text written by children.
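To make the length requirement concrete, here is a minimal sketch of the kind of client-side check a caller might run before submitting a passage; the constant and function name are purely illustrative and are not part of OpenAI's tool.

```python
MIN_CHARS = 1000  # the classifier rejects passages shorter than this


def ready_for_classification(text: str) -> bool:
    """Return True if a passage meets the tool's minimum-length requirement."""
    return len(text.strip()) >= MIN_CHARS


# Hypothetical usage: repeat a short sentence until it clears the threshold.
essay = "An example passage submitted for checking. " * 40  # roughly 1,700 characters
print(ready_for_classification(essay))  # True
```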
When evaluating text, the classifier labels it as "very unlikely," "unlikely," "unclear if it is," "possibly," or "likely" AI-generated, depending on its confidence level. In tests, the classifier correctly flagged AI-generated text taken from a Gizmodo article about ChatGPT but failed to classify the text of a full-length article produced by ChatGPT.
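As a rough illustration of how such a five-level verdict could be derived from a confidence score, the sketch below maps an AI-probability value to the classifier's labels. The cutoff values are placeholder assumptions chosen for illustration, not OpenAI's published thresholds.

```python
def label_ai_likelihood(p_ai: float) -> str:
    """Map an AI-probability score to one of the classifier's five verdicts.

    The cutoffs below are illustrative placeholders, not OpenAI's actual thresholds.
    """
    if p_ai < 0.10:
        return "very unlikely AI-generated"
    if p_ai < 0.45:
        return "unlikely AI-generated"
    if p_ai < 0.90:
        return "unclear if it is AI-generated"
    if p_ai < 0.98:
        return "possibly AI-generated"
    return "likely AI-generated"


# Example scores spanning the five bands.
for score in (0.05, 0.30, 0.70, 0.95, 0.99):
    print(f"{score:.2f} -> {label_ai_likelihood(score)}")
```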