As artificial intelligence technologies advance, the problem of hallucinations has become more visible. The false information that language models present with confidence worries not only technology enthusiasts but also the institutions investing in these systems.
As artificial intelligence develops, hallucinations increase
Moreover, the problem grows rather than shrinks in more advanced versions of the models. According to a published report, the "o3" and "o4-mini" models introduced by OpenAI last month produced hallucinations at rates of 33% and 48%, respectively, in internal tests.

These rates represent an almost two-fold increase over previous models. Models developed by companies such as Google and DeepSeek show a similar tendency to produce erroneous information. The source of the problem lies not so much in the architecture of the models as in the lack of a full understanding of how these systems work.
According to Vectara CEO Amr Awadallah, hallucinations are an inevitable part of how artificial intelligence works, and he emphasizes that they cannot be completely eliminated. Experts see this as a risk with potentially serious consequences not only for end users but also for the companies that rely on these technologies.
Beyond undermining user trust, fabricated information can lead to poor decisions. One of the factors behind this increase is the use of synthetic data: because real-world data is no longer sufficient, many companies have started training their models on data generated by artificial intelligence. However, models trained on such data can reinforce and amplify existing errors.
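The mechanism is easier to see with a small toy simulation, given as a minimal sketch that is not tied to any actual model or training pipeline; the starting accuracy, number of generations, and sample size below are arbitrary assumptions chosen for illustration. Each "generation" is fit only on data sampled from the previous generation's output, so once accuracy drifts away from the real value, no fresh real-world data is there to pull it back.

```python
# Toy sketch (hypothetical numbers): a "model" is reduced to a single accuracy
# estimate, and each new generation is re-fit on synthetic data sampled from
# the previous generation's output instead of from real data.
import random

TRUE_RATE = 0.95        # assumed accuracy of the original, real-world data
GENERATIONS = 10        # rounds of retraining on synthetic data
SAMPLES_PER_GEN = 500   # synthetic examples generated per round

random.seed(0)

rate = TRUE_RATE
history = [rate]
for gen in range(GENERATIONS):
    # The current model generates synthetic data: each statement is correct
    # with probability equal to its own (possibly already degraded) accuracy.
    samples = [random.random() < rate for _ in range(SAMPLES_PER_GEN)]
    # The next model is "trained" by re-estimating accuracy from that synthetic
    # data, so sampling noise and earlier errors are carried into the new model.
    rate = sum(samples) / SAMPLES_PER_GEN
    history.append(rate)

print("accuracy per generation:", [round(r, 3) for r in history])
```

Because real data never re-enters the loop, the estimate wanders a little further with each round, and any degradation is inherited by the next generation rather than corrected.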
So, what do you think about this issue? You can share your views with us in the comments section below.