Meta is making a significant change to the product security processes on its platforms, such as Instagram and WhatsApp. Going forward, the majority of product risk assessments will be carried out by artificial intelligence systems instead of by human reviewers.
Meta is putting artificial intelligence at the center of its security process
Meta’s new system will analyze the potential risks of a newly developed product or feature and quickly inform the relevant teams. The company plans to run 90 percent of all such assessments through this automated system.

Meta has significantly increased its use of artificial intelligence over the past two months. Under the new process, product development teams fill out a questionnaire for each feature before it is released to users and submit the responses to the AI system.
The system identifies the potential harms the product could cause and returns feedback. Teams are expected to make the necessary adjustments, in line with this automated assessment, before the feature goes live.
However, the change has also sparked debate both inside and outside the company. Current and former employees say the AI may not always operate with the desired level of precision.
In particular, they warn that AI systems may fail to correctly identify content that could harm young users, and they stress that bullying, violence, and other socially sensitive topics risk being overlooked without human oversight.
In response to these concerns, Meta says the process is not being handed over to AI entirely: human reviewers will still be involved in complex and high-risk cases. So, what do you think about this change? You can share your views with us in the comments section below.