OpenAI, the company behind ChatGPT, has launched a “Bug Bounty Program” to bring outside security researchers into the process of securing its systems. Under the program, security experts who find vulnerabilities in OpenAI’s systems can earn rewards ranging from $200 to $20,000.
Rewards start at $200 for “low-severity findings” and go all the way up to $20,000 for “exceptional discoveries.”
However, not every issue reported to OpenAI will earn a cash reward. Jailbreaks and harmful or inappropriate model outputs, for example, are out of scope because they are not counted as security bugs. OpenAI launched the program to improve its systems’ privacy and security. Last month, a bug caused ChatGPT to show some users other users’ chat threads, and the service had to be taken down temporarily. This was a significant privacy violation, and the company continues to improve its systems to prevent similar incidents from happening again.
In its announcement, the company said: “OpenAI’s mission is to create artificial intelligence systems that benefit everyone. To that end, we invest heavily in research and engineering to ensure our AI systems are safe and secure. However, as with any complex technology, we understand that vulnerabilities and flaws can emerge.

We believe that transparency and collaboration are crucial to addressing this reality. That’s why we are inviting the global community of security researchers, ethical hackers, and technology enthusiasts to help us identify and address vulnerabilities in our systems. We are excited to build on our coordinated disclosure commitments by offering incentives for qualifying vulnerability information. Your expertise and vigilance will have a direct impact on keeping our systems and users secure.”
This is a sensible approach. Modern technological systems, and artificial intelligence in particular, are extremely complex, so even the people who build them may be unaware of certain security vulnerabilities. Outside researchers may well find flaws that internal teams have missed.