
Google Gemini 3 jailbreak vulnerability revealed


Google’s newest and most powerful AI model faced a serious security challenge shortly after its release. A South Korean security team jailbroke Gemini 3, bypassing the model’s safety protections in just five minutes. The result caused considerable surprise in the AI world.

A security startup called Aim Intelligence ran a comprehensive stress test to probe the new model’s limits. The researchers successfully bypassed the guardrails Google had built. Rather than writing complex exploit code, the team targeted weaknesses in the system itself. As a result, all of the model’s safety protocols were disabled.

Once the guardrails were breached, the model generated highly dangerous information. The researchers asked the AI for instructions on how to create smallpox, and the system responded with actionable, detailed steps. It also produced guides for making sarin gas and homemade explosives. Content like this should normally be blocked by the system.

One of the most striking aspects of the incident was the AI’s response. The researchers asked the model to create a satirical presentation about its own vulnerability, and Gemini 3 complied, producing a full slide deck titled “Gemini 3, the Excused Idiot.” This shows that the system not only provided dangerous information but also completely ignored its own rules.

So, what are your thoughts on Google Gemini? Share them with us in the comments!
