The rapid development of artificial intelligence technology brings with it not only numerous positive changes but also some risks. One of the latest risks is a virus known as “Morris II.” This worm can infiltrate your ChatGPT and Gemini conversations and steal your information.
The Morris II virus targeting ChatGPT and Gemini
The virus called Morris II, which targets artificial intelligence services, makes it possible to steal personal data, spread propaganda, and carry out phishing attacks using artificial intelligence. Interestingly, Morris II was initially developed to warn technology companies about potential threats.
Morris II targets AI services and can replicate itself by adding malicious commands to the inputs processed by the models. These commands are then used in various malicious activities.
Researchers tested the malware by attacking AI-powered email assistants. AI-powered systems are creating a new arena for such malware. Morris II can infiltrate AI-powered systems and manipulate them.
The capabilities of this software are clearly demonstrated by attacks on AI-based email assistants. Actions such as sending spam to end-users using an email attachment or poisoning a database using an email text are among the capabilities of this worm.
This study highlights a new type of threat that must be considered in the design of AI ecosystems. Researchers emphasize the need to evaluate potential risks rather than questioning the widespread use of artificial intelligence.
The emergence of this new malware opens a discussion on the security aspects of AI technologies and potential vulnerabilities. How will this development affect concerns about the security of artificial intelligence?
What measures should technology companies take against such threats? You can share your thoughts with us. We would like to hear your views on the security issues of artificial intelligence.
{{user}} {{datetime}}
{{text}}