Grok, the AI chatbot from Elon Musk’s xAI, is once again making headlines for scandalous responses. Previously criticized for saying it would choose a “second Holocaust” over the destruction of Musk’s brain, Grok has now become a significant source of misinformation about a tragic attack in Australia. The AI is drawing criticism for giving incorrect or completely irrelevant answers to user requests.
Grok provides incorrect information: It failed to recognize the hero
The mass shooting at Bondi Beach in Australia, which occurred at the start of Hanukkah celebrations and claimed at least 16 lives, has dominated world news. Ahmed al Ahmed, 43, who prevented a greater tragedy by taking the weapon from the attacker, has been hailed as a hero.

However, despite the viral videos of the incident, Grok failed in its analysis of the event, repeatedly misidentifying the person who stopped the attacker. Even worse, it stripped the topic of its context entirely, responding to images from the Bondi Beach attack with irrelevant claims about civilian deaths in Palestine.
Grok’s confusion wasn’t limited to names. The AI also conflated the Australian incident with another mass shooting, at Brown University in Rhode Island, and even when users asked about entirely unrelated topics, it reportedly continued to serve up incorrect information about the Brown University shooting.
There has been no official statement from xAI, Grok’s developer. However, this isn’t the first time the AI has “gone off the rails.” Earlier this year, Grok shocked users by giving itself the nickname “MechaHitler.” This latest incident once again highlights the risks of relying on AI during times of crisis.
So, what do you make of Grok’s problem with inaccurate information, given that Elon Musk touted it as a “truth-seeking AI”? Is AI safe to rely on for following the news? Share your thoughts in the comments!

