There has been a major wave of departures from OpenAI's AGI safety team, which is responsible for ensuring that the artificial general intelligence (AGI) systems the company develops do not pose a threat to humanity. According to former OpenAI governance researcher Daniel Kokotajlo, nearly half of the researchers on this team have left the company in recent months. So why are these employees leaving or being laid off? Details below…
Winds of separation at OpenAI: Half of the AGI safety team leaves the company
Since 2024, the AGI safety team at OpenAI, the artificial intelligence giant known for its GPT language models and ChatGPT, has shrunk from roughly 30 people to 16. This has raised concerns about whether OpenAI is paying enough attention to safety.
According to Kokotajlo, these departures were not the result of an organized movement but rather individual researchers deciding to leave the company. OpenAI said in a statement that it remains confident in its ability to deliver the world's safest AI systems.
Still, it is hard to ignore that such a significant exodus raises questions about the future of the company's safety research: how will OpenAI approach the safety of future AGI systems, and what new projects will the departing researchers pursue?
What do you think? What do these departures mean for AGI safety? Should OpenAI be doing more on this issue? Leave your opinions in the comments section below.