As artificial intelligence (AI) continues to make strides in consumer applications, concerns over its legal implications are beginning to surface. In particular, the question of whether AI can be sued for defamation is gaining attention following a recent lawsuit filed against OpenAI, the developer of the AI chatbot ChatGPT.
Chatbot defamation lawsuit against OpenAI raises questions about AI accountability
ChatGPT became the fastest-growing consumer app in history, reaching 100 million users within two months of its launch earlier this year. However, it soon faced allegations of plagiarism and of inaccuracies in the answers it provided. In March, a landmark defamation claim was brought against OpenAI by an Australian man, Brian Hood, who alleged that the chatbot had falsely accused him of involvement in a bribery scandal.
Hood had in fact been the whistleblower who exposed the crimes committed by two Reserve Bank of Australia subsidiaries, Securency and Note Printing Australia, which were fined a total of AU$21 million. Yet when asked about Hood’s role in the scandal, ChatGPT falsely suggested that he had been among those accused and had pleaded guilty to charges of conspiracy to bribe foreign officials.
Lawyers acting on Hood’s behalf filed a concerns notice with OpenAI in late March, but allegedly received no response. While the ChatGPT interface includes a disclaimer warning of potential inaccuracies, some have suggested that such cases are indicative of the inherent flaws in AI tools like ChatGPT.
Chatbot defamation and the growing need for accountability in the technology industry
According to Professor Geoff Webb of Monash University, large language models like ChatGPT “echo back the form and style of massive amounts of text on which they are trained” and may “repeat falsehoods that appear in the examples they have been given”. While the newest GPT model, GPT-4, is said to deliver a 40% improvement in factual accuracy on adversarial questions, it remains to be seen whether this will be enough to avoid future legal issues.
The case raises important questions about the responsibility of AI developers to ensure that their tools do not cause harm. While the law surrounding AI and defamation is still developing, it is clear that AI developers are not immune to legal action. As AI continues to evolve and becomes more prevalent in everyday life, the legal implications are likely to grow even more complex.
In the meantime, it is essential that AI developers take proactive steps to minimize the risk of harm caused by their tools. This includes not only improving the accuracy of their models but also implementing effective measures to address concerns raised by users. Failure to do so risks legal consequences as well as lasting damage to the reputation and trustworthiness of the AI industry as a whole.