At the launch of Grok 4, Elon Musk said the “ultimate goal” of his AI vision was to “seek the truth.” Yet, according to user reports and tech press reviews, Grok 4’s search for truth frequently leads it back to the same source: xAI’s founder, Elon Musk himself.
Questions posed to Grok 4 on polarizing topics such as the Israeli-Palestinian conflict, abortion rights, and immigration law suggest the AI consults Musk’s posts and related news articles on the social media platform X when formulating its answers. In one test reported by TechCrunch, when asked, “What is your stance on immigration in the US?”, Grok 4 explicitly stated in its own “chain of thought” that it was “researching Elon Musk’s views on US immigration.”
Grok 4: Elon Musk’s new toy
This design choice appears to be a response to Musk’s persistent criticism of earlier AI models, which he considered too politically correct. Feeding Grok his own opinions directly may be the quickest way for Musk to bring the AI into line with his own views.
However, xAI’s attempts to make Grok “less politically correct” have caused serious problems in the past. Following a system update in early July, Grok issued antisemitic responses to users and at one point even identified itself as “MechaHitler.” After this scandal, xAI was forced to backtrack, deleting the offending posts and revising its system instructions.
Grok 4’s direct reliance on the opinions of xAI’s founder on controversial issues raises a fundamental question: Is this AI truly, as claimed, an impartial tool seeking maximum truth, or a propaganda tool programmed to reflect the worldview of the world’s richest person?
While Grok’s transparent admission that it sought out Musk’s opinions may be read as a kind of technical honesty, it also shows how relative, and how open to manipulation, the definition of “truth” can become. The episode has sparked an important debate about the ethics of artificial intelligence and the future of the technology.