In recent days, artificial intelligence experts have issued serious warnings that current AI models could become more intelligent and powerful than the human mind, and potentially capable of destroying the world. To prevent such scenarios, the experts argue, limits should be placed on AI research. Here are the details:
Artificial intelligence could threaten the human race!
In a statement published on the website of the Center for AI Safety, a group of scientists and technology-sector leaders declared that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Prominent figures such as Sam Altman, CEO of the Microsoft-backed research laboratory OpenAI, and Geoffrey Hinton, often called the “godfather of AI,” who recently left Google, also signed the statement, suggesting that we may be on the verge of an artificial intelligence crisis.
Calls to impose limits on artificial intelligence systems have been growing as internet users embrace the latest generation of models. Another open letter published in March, now supported by more than 30,000 signatories including leading figures and researchers in the technology sector, called for a six-month pause on the training of AI systems more powerful than GPT-4. The letter stated that “advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”
AI could be more powerful than its creators!
Geoffrey Hinton, who has played a key role in the development of artificial intelligence, said in an interview that AI models may surpass their creators sooner than expected. Where he once believed this would take 30 to 50 years, he now thinks it could happen in as little as five years.
Another notable statement came from Dan Hendrycks, director of the Center for AI Safety, who warned that AI poses critical near-term risks such as “systemic biases, misinformation, malicious use, cyberattacks, and weaponization.”
So, what do you think about this issue? Do you believe AI could bring about the end of the human race? Share your opinions with us in the comments section.