OpenAI, one of the most prominent technology companies of recent years, has been sued over false statements generated by ChatGPT. ChatGPT, a generative artificial intelligence chatbot, is built on a language model called “GPT”. This model is not always accurate; in fact, the chatbot often makes mistakes even on very basic topics, so its output should never be trusted blindly.
ChatGPT’s fabricated information is creating a problem for OpenAI
The issue is back on the agenda because ChatGPT generated entirely false information about Mark Walters, a radio host living in Georgia. The chatbot accused Walters of defrauding and embezzling funds from a non-profit organization, which was not the case. The system reportedly produced the fabricated information in response to a request from a journalist named Fred Riehl. Upon learning of this, Mark Walters sued OpenAI, in what was reported to be the first defamation lawsuit filed against the company.
Many artificial intelligence experts have expressed concern about chatbots generating false and misleading information, and developers need to take specific, careful steps to address it. In today’s internet environment, where few people question what they read, a reputation that took years to build can be destroyed in minutes by a chatbot’s answer. OpenAI’s warning to ChatGPT users on this issue is simply: “The system may occasionally generate incorrect information…”
The blind trust mentioned in the introduction was previously demonstrated to the world by a US lawyer. According to The New York Times, Steven Schwartz, a lawyer at the firm Levidow, Levidow & Oberman, recently turned to OpenAI’s chatbot for help writing a legal brief. ChatGPT obliged, but in doing so it fabricated information, including citations to cases that did not exist. This, of course, created a serious problem for the lawyer once it was discovered.