A dangerous new AI chatbot called GhostGPT has emerged on the dark web. Cybercriminals now use the tool to create malware.
Security firm Abnormal Security revealed the threat in a recent investigation. The company warns that GhostGPT removes safety barriers found in legitimate AI tools.
The chatbot operates without ethical restrictions. It helps criminals write malicious code and craft convincing phishing emails.
“GhostGPT strips away all safety measures built into normal AI systems,” Abnormal Security says in its report. “This allows direct access to harmful capabilities.”
Criminals access the tool through Telegram for a mere $50 weekly fee. A full three-month subscription costs $300.
The bot likely uses modified versions of existing AI models. Developers removed protective barriers to enable malicious activities.
Security researchers tested GhostGPT’s capabilities firsthand. The bot quickly produced a convincing phishing email template.
GhostGPT joins a growing family of criminal AI tools. WormGPT appeared in 2023, followed by WolfGPT and EscapeGPT.
The tool’s creators attempt to dodge legal issues. They market it as a “cybersecurity tool” despite selling it on criminal forums.
“These disclaimers fail to hide the tool’s true purpose,” Abnormal Security states. “The focus on email scams reveals its criminal nature.”
GhostGPT’s simple interface poses a serious threat. Even inexperienced hackers can now launch sophisticated attacks.
The bot requires no technical setup or special knowledge. Users simply pay the fee and start creating malicious content.
Security experts warn about rising AI-powered threats. Traditional security tools may fail to catch AI-generated attacks.
“This marks a turning point in cybercrime,” the report concludes. “Security teams must adapt to counter AI-enhanced threats.”
The emergence of GhostGPT signals a dangerous trend. Criminal groups increasingly harness AI to boost their capabilities.
Abnormal Security urges organizations to strengthen their defenses. The firm recommends updating security systems to detect AI-generated threats.
Law enforcement agencies track these AI tools closely. However, their availability on encrypted platforms makes shutdown efforts difficult.
The rise of AI crime tools forces a security rethink. Organizations face mounting pressure to upgrade their defenses as the fight against AI-enhanced cybercrime enters a new phase.