Google’s recently launched AI Overviews feature in search results caused a brief public uproar in Australia. One of the system’s responses claimed that users in the country were subject to official identity verification for internet access, and that this process would be handled by AU10TIX, an Israel-based company. Although unverified, the claim quickly spread on social media and sparked public panic.
Google’s new feature sparked panic in Australia
Following the incident, the vx-underground community, known for its cybersecurity research, stepped in. Its investigation revealed that the claim was completely false: official Australian government websites impose no such identity verification requirement for internet use, nor do they contain any reference to AU10TIX.

Hallucination is the phenomenon in which artificial intelligence systems generate false information and present it as reliable. This incident in Google’s system demonstrates that hallucination is more than a mere technical error and can have serious real-world consequences.
The influence of AI-based information systems on public perception, combined with users’ tendency to accept their output without question, amplified the panic.
Google has not issued an official statement regarding the incident. However, as such systems become increasingly widespread, the reliability of AI-generated information has once again been called into question. The episode underscores the need for stronger global measures to regulate AI-generated content and to improve users’ information literacy.

