A significant ChatGPT security vulnerability, dubbed ‘ZombieAgent,’ was recently discovered by cybersecurity firm Radware. This critical flaw in OpenAI’s Deep Research tool allowed attackers to steal sensitive data without any user interaction, operating silently within OpenAI’s cloud infrastructure.
How the “ZombieAgent” ChatGPT Security Vulnerability Worked
Attackers exploited this vulnerability by embedding malicious instructions into ordinary documents or emails. When ChatGPT’s autonomous research tool processed these files, the malware activated automatically, without the user needing to click any links. Once triggered, the exploit could quietly access and collect data from the user’s mailbox, read sensitive files, and transmit this information to external servers.
Cybersecurity firm Radware reported the issue to OpenAI on September 26, 2025. In response, the company addressed the serious flaw by releasing a patch on December 16, 2025. The most alarming aspect of this attack is that it occurs entirely within OpenAI’s cloud environment, making it invisible to traditional corporate firewalls and endpoint protection systems.
A Persistent and Spreading Threat
What sets ZombieAgent apart from other exploits is its ability to directly implant malicious rules into the AI’s long-term memory. This allowed attackers to maintain persistent control over the system without needing to interact with the target user again. Furthermore, researchers highlighted that the vulnerability could spread autonomously to other individuals within an organization, creating a risk similar to a fast-spreading ‘worm-type campaign.’

This novel data exfiltration method also successfully bypassed security measures OpenAI had previously implemented for a different flaw known as ‘ShadowLeak.’ While OpenAI had prevented ChatGPT from dynamically changing link addresses, ZombieAgent used a series of pre-made URLs, each ending with a different character, to leak data one character at a time. Radware officials warn that businesses must consider these structural weaknesses when granting autonomous AI tools access to sensitive systems.
So, what are your thoughts on AI tool security? Share your opinions with us in the comments!

