Private conversations between users and AI-based chatbots like ChatGPT and Copilot are turning out to be less private than expected. Security researchers have shown that responses from these AI assistants can be reconstructed from their encrypted network traffic.
How are ChatGPT messages being read? Israeli scientists explain
Warnings from Israel indicate that private conversations with AI-based chatbots like ChatGPT can be read by hackers, raising significant concerns about the security of these communication services.
Yisroel Mirsky, head of the Offensive Artificial Intelligence Research Laboratory at Ben-Gurion University in Israel, stated that, at present, anyone can read private messages coming from ChatGPT and similar services.
This includes attackers on the same Wi-Fi network as the victim. Mirsky explains that although OpenAI encrypts its data traffic, the way the encryption is used leaks the content of the messages: each token is streamed to the user in its own packet as soon as it is generated, so the size of every encrypted packet reveals the length of the token it carries. According to the report, the researchers exploit this token-length side channel by intercepting the traffic between users and ChatGPT from a “Man-in-the-Middle” position on the network; the encryption itself is never broken.
In this attack, an intruder quietly positions itself on the network path between the two parties and captures the raw encrypted packets exchanged between the user and the chatbot, extracting the sequence of token lengths from their sizes. Attackers then use large language models, trained to reconstruct text from such length sequences, to make sense of this data.
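To picture the side channel itself, here is a minimal sketch assuming each streamed token travels in its own encrypted record with a fixed framing overhead; the OVERHEAD value and the captured packet sizes are invented for illustration, not real measurements:

```python
# Minimal sketch of the token-length side channel. Assumes every streamed
# token is sent in its own encrypted record with a fixed per-record overhead.
# OVERHEAD and the captured sizes below are made up for illustration.

OVERHEAD = 5  # hypothetical fixed bytes of framing added to every record

def token_lengths(packet_sizes: list[int]) -> list[int]:
    """Recover the length of each streamed token from ciphertext sizes."""
    return [size - OVERHEAD for size in packet_sizes]

# Packet sizes an eavesdropper might observe on the wire:
captured = [7, 9, 8, 6, 12]
print(token_lengths(captured))  # -> [2, 4, 3, 1, 7]
```

The eavesdropper never sees the tokens themselves; only their lengths leak, one per packet.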
Because tokens correspond to words or word fragments, the leaked sequence of lengths narrows down the possible sentences. According to the report, the method can infer the topic of a response in 55% of cases and reconstruct a response word-for-word in 29% of cases.
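The inference step can be illustrated with a toy example. The sketch below uses a naive word-level length signature and a tiny hand-made candidate list; the actual attack operates on subword tokens and ranks candidates with a fine-tuned language model, so everything here is merely illustrative:

```python
# Toy version of the inference step: match leaked token lengths against
# candidate sentences. The real attack uses subword tokens and a fine-tuned
# LLM to rank candidates rather than this exact-match filter.

def length_signature(sentence: str) -> list[int]:
    """Map a sentence to the sequence of its word lengths."""
    return [len(word) for word in sentence.split()]

observed = [3, 2, 1, 7, 2, 6]  # token lengths leaked by the side channel

candidates = [
    "How do I improve my resume",
    "Yes it is possible today",
    "Ask me anything you like",
]

# Keep only candidates whose length signature matches the leak exactly.
matches = [c for c in candidates if length_signature(c) == observed]
print(matches)  # -> ['How do I improve my resume']
```

Because many different words share the same length, the mapping is ambiguous, which helps explain why the researchers' reconstruction succeeds only part of the time.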
The report indicates that services like ChatGPT and Microsoft Copilot are vulnerable to this attack, while Google’s Gemini is not affected. The findings expose security vulnerabilities in AI-based communication services and the challenges of protecting user data.
What do you think these findings will mean for the security of ChatGPT and similar AI services? Share your thoughts in the comments section below.