Four of the biggest companies in artificial intelligence have decided to join forces. The new group, the Frontier Model Forum, will be formed by Microsoft, OpenAI, Google and Anthropic and will work toward the safe and accountable development of AI.
Google, Microsoft and more take a step towards AI safety
The Biden administration, which recently hosted the technology giants at the White House, has finally seen decisive steps taken. The Frontier Model Forum, established after that meeting, will be responsible for the development of safe artificial intelligence.
Frontier AI models go beyond the capabilities of the models any single company offers today. They are described as “machine learning models that can perform a wide range of tasks”. The forum aims to advance safety research and to ensure such models are developed and deployed responsibly.
Microsoft, OpenAI, Google and Anthropic will focus initially on safety standards for advanced AI tools. To that end, an advisory board will be formed with members from each company. The group will also share information on model safety with other companies and with governments.
As part of the announcement, Microsoft President Brad Smith underlined the technology industry’s responsibility to ensure that AI remains “safe, secure and under human control”. He also stated that the Frontier Model Forum will be open to other companies conducting research in this area.
Developments in artificial intelligence bring potential risks and concerns with them. The Frontier Model Forum aims to head off these problems by establishing cooperation among companies and shared safety guidelines.
What do you think about this issue? Don’t forget to share your opinions with us in the comments!