OpenAI has developed a new scale to measure how far AI models have progressed toward human-like intelligence. The scale assesses models across five levels, with the goal of better tracking progress toward artificial general intelligence (AGI). However, it also raises some concerns.
How does the new OpenAI AI scale work?
The scale measures AI progress across five levels. Existing language models such as ChatGPT sit at the first level, while the second level describes an AI with problem-solving capabilities equivalent to human intelligence. The third level covers an AI that can perform tasks without user intervention, and the fourth an AI that can generate new ideas and concepts. At the fifth and final level, an AI could take on the work of entire organizations rather than individuals.
Sam Altman, CEO of OpenAI, has stated that the company is on the verge of reaching the second level and could do so with GPT-5. However, this progress raises ethical and security questions.
Security and ethical issues
The goal of achieving AGI raises not only technical issues but also ethical and security ones. OpenAI’s disbanding of its safety team and the departure of several senior researchers add to these concerns. In May, senior researchers left their posts, citing disregard for the company’s safety culture.
If AI reaches these higher levels, it could have major impacts on society. OpenAI’s new scale therefore needs to account not only for technical progress but also for ethical and safety standards.
OpenAI’s new scale is an important step in monitoring and evaluating the progress of artificial intelligence, but that progress must be weighed carefully against the ethical and security questions it raises. These developments at OpenAI carry great technical and societal significance.