AI-powered chatbots have emerged as a new internet security risk. According to research conducted by Netcraft, AI chatbots such as ChatGPT and Perplexity make serious errors when directing users to requested websites. The research found that these systems provide the correct link only 66% of the time, while the remaining 34% of links point to sites that are nonexistent, unrelated, or outright fake.
Chatbots redirect to malicious sites
This error rate poses a significant threat, particularly as an opening for digital fraud. According to the research, 29% of the links provided point to sites that either do not exist or are outright fakes. The remaining 5% lead to pages that load correctly but have no connection to the site the user asked for. When users request a bank's official website, for example, the system may share a link to a page that was set up in advance for fraudulent purposes.

Rob Duncan, Head of Threat Research at Netcraft, says such errors create new opportunities for cybercriminals. Faulty redirects from chatbots make it easier for fraudsters to set up fake websites and deceive users. These errors can have serious consequences, especially in sensitive contexts such as financial transactions, the sharing of personal information, or the entry of login credentials.
Experts point out that language models generate text from statistical relationships between words and do not automatically verify the accuracy of what they produce. For this reason, they stress that URLs provided by chatbots should always be verified manually by the user. Security experts advise taking extra care with links that claim to belong to official institutions, brands, or banks.
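As a rough illustration of what such a manual check can look like in practice, the sketch below compares the hostname of a chatbot-suggested link against a small allowlist of domains the user already trusts. The allowlist and the domain names in it (KNOWN_OFFICIAL_DOMAINS, examplebank.com) are hypothetical, stand-in values, not part of the Netcraft research; in real use, trusted domains would come from a user's own bookmarks or a password manager.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official domains the user already trusts;
# in practice this would come from bookmarks or a password manager.
KNOWN_OFFICIAL_DOMAINS = {
    "examplebank.com",
}

def looks_official(url: str) -> bool:
    """Return True only if the URL's host is exactly a trusted domain
    or a subdomain of one (e.g. login.examplebank.com)."""
    host = (urlparse(url).hostname or "").lower()
    return any(
        host == domain or host.endswith("." + domain)
        for domain in KNOWN_OFFICIAL_DOMAINS
    )

# A chatbot-suggested link should be checked before it is opened.
print(looks_official("https://login.examplebank.com/signin"))   # True
print(looks_official("https://examplebank.com.verify-id.net"))  # False: lookalike
```

The second example shows why a plain substring match is not enough: a fraudster can embed the real brand name inside a different registered domain, which is exactly the kind of lookalike page the research warns about.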
Although artificial intelligence technologies are developing rapidly, such gaps in information security open the door to new kinds of threats in the digital environment. Alongside the convenience offered by systems like ChatGPT, it remains important for users to act consciously and carefully.