A lawyer in the US got into trouble after using OpenAI’s chatbot ChatGPT as part of a legal case. ChatGPT is a generative AI chatbot powered by the “GPT” family of large language models. Unfortunately, these models are not always accurate; in fact, the chatbot often makes mistakes even on very basic topics, so its answers should never be accepted blindly. One American lawyer has now demonstrated this to the whole world very clearly.
Lawyer Schwartz shows why ChatGPT should not be trusted without verification
Steven Schwartz, a lawyer at the firm Levidow, Levidow & Oberman, recently turned to OpenAI’s chatbot for help writing a legal brief, The New York Times reported. Schwartz’s firm had sued the Colombian airline Avianca on behalf of Roberto Mata, who claimed he was injured during a flight to John F. Kennedy International Airport in New York. Avianca asked a federal judge to dismiss the case, and this is exactly where ChatGPT got involved.
Mata’s lawyer prepared a 10-page brief arguing why the case should continue. The brief cited numerous court decisions, including “Varghese v. China Southern Airlines”, “Martinez v. Delta Airlines” and “Miller v. United Airlines”. None of these cases was real: ChatGPT had made them up. Schwartz admitted that he had used ChatGPT to help prepare the brief.
Schwartz even stated that he had asked the chatbot itself to verify that the cases were real, unaware that ChatGPT could provide false information. He expressed deep regret for relying on ChatGPT and said he would never do so again without verifying its accuracy. The judge hearing the case has scheduled a hearing for June 8 to discuss possible sanctions over the unprecedented situation created by Schwartz’s filing.