OpenAI made a global impact with its ChatGPT artificial intelligence. Ilya Sutskever, who played a major role in developing the company's AI, announced his departure earlier this year. Now he has unveiled his new venture, Safe Superintelligence (SSI).
What will Safe Superintelligence (SSI) do?
Safe Superintelligence aims to advance AI development with a single goal in mind: building a safe superintelligence. SSI will develop AI with safety as its central focus and will also evaluate the ethics of AI systems, ensuring that its projects remain safe.
Sutskever, a co-founder of OpenAI who served as its chief scientist, announced his departure from the company in May. At SSI, he will work to ensure that artificial intelligence systems remain beneficial to humanity.
During his time at OpenAI, he worked on AI ethics and on how to steer increasingly capable AI systems, heading the company's Superalignment team. With his new company, he will continue that mission of building safe, superintelligent AI.
According to the company's statement, SSI's structure insulates safety, security, and beneficial progress from short-term commercial pressures. The company will also examine whether other AI models embody ethical values. Sutskever co-founded SSI with Daniel Gross, who previously led AI efforts at Apple, and Daniel Levy, a former OpenAI engineer.
With SSI, Sutskever aims to take the lessons learned from OpenAI and focus all of the company’s efforts on the critical goal of developing safe superhuman AI.
What do you think about this? Don't forget to share your views in the comments section below…