Google is generally known for taking user security seriously and implementing various measures to keep its products secure. However, it has been revealed that the company is less than willing to address a problem discovered in its AI tool Gemini, which leaves it vulnerable to a concerning type of cyberthreat. This means Gemini users’ sensitive information could be at risk.
Gemini vulnerability could expose your personal information
A test conducted by security researcher Viktor Markopoulos revealed that Gemini is vulnerable to a type of cyberattack known as “ASCII smuggling.” This attack involves hiding malicious AI commands within text, such as emails or calendar invitations. An attacker can hide the command from the user by writing it in a very small font or in the same color as the text.

When a user asks an AI tool like Gemini to summarize the text, the AI then reads and executes the invisible command. This can have very dangerous consequences. For example, a hidden command could instruct AI to find sensitive information in your inbox, such as a credit card number or password, and send it to someone else. The risk is even greater considering that Gemini is now integrated with Google Workspace (Gmail, Calendar, etc.).
Markopoulos reported these findings to Google and even provided a demonstration in which he disguised a command that tricked Gemini into redirecting the user to a malicious website with the promise of a discounted phone. However, Google dismissed the report, stating that it considered the issue a “social engineering tactic” rather than a security flaw. The company suggests that the responsibility ultimately lies with the user.
Google’s response suggests that it does not plan to release a patch to fix this security issue in Gemini anytime soon. According to the research, while popular AI tools ChatGPT, Claude, and Copilot have protections against such attacks, Gemini, Grok, and DeepSeek are vulnerable.
This situation reiterates general concerns about AI security. Google’s decision to leave the responsibility to the user has sparked debate in tech circles. What are your thoughts on Google’s approach? Do you have any security concerns when using AI tools?