A new study has uncovered a surprising trend in AI search engines: these systems appear to cite lesser-known, low-traffic websites as sources far more often than traditional search engines such as Google.
How do AI search engines work differently?
Researchers from Ruhr University Bochum and the Max Planck Institute investigated this issue, comparing Google’s standard search results with responses from Google’s AI Overviews, Gemini 2.5 Flash, and GPT-4o in web search mode.

The analysis revealed that the AI systems drew on sites that ranked much lower in popularity. In many cases, these sources didn’t even appear in Google’s top 100 results for the same query. For example, Gemini’s sources had a median domain rank above 1,000.
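To make the domain-rank comparison concrete, here is a minimal sketch of how such a measurement could be computed; it is not the study’s actual pipeline. The Tranco-style popularity CSV, its file name, and the cited URLs are illustrative assumptions.

```python
# Minimal sketch: estimate the median popularity rank of the domains an
# AI search engine cites. Assumes a Tranco-style "rank,domain" CSV; the
# file name and example URLs below are hypothetical.
import csv
from statistics import median
from urllib.parse import urlparse

def load_rank_list(path):
    """Map domain -> popularity rank from a 'rank,domain' CSV."""
    ranks = {}
    with open(path, newline="") as f:
        for rank, domain in csv.reader(f):
            ranks[domain] = int(rank)
    return ranks

def domain_of(url):
    """Naive registrable-domain extraction: keep the last two labels."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def median_cited_rank(cited_urls, ranks, unlisted_rank=10**6):
    """Median popularity rank over cited sources; domains missing from
    the list get a large sentinel rank (a design choice for the sketch,
    since very obscure sites may not appear in popularity lists at all)."""
    return median(ranks.get(domain_of(u), unlisted_rank) for u in cited_urls)

if __name__ == "__main__":
    ranks = load_rank_list("tranco_top1m.csv")       # hypothetical file
    cited = ["https://example-niche-blog.com/post",  # hypothetical citations
             "https://en.wikipedia.org/wiki/Search_engine"]
    print("median domain rank:", median_cited_rank(cited, ranks))
```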
However, this shift toward low-traffic sites didn’t appear to harm information quality. According to the study, GPT-based models avoided social media in favor of corporate sites and encyclopedic content, and the level of information diversity remained similar to that of traditional search.
The study did identify weaknesses, though. Because these systems summarize information, small or obscure details can be lost, and the researchers also noted that AI tools struggle to provide up-to-date information on time-sensitive or rapidly changing topics.
So, what do you think about AI changing search habits? Share your thoughts with us in the comments!

