The recent introduction of Grok, Elon Musk’s new AI chatbot, has sparked controversy due to its unusual and, for some, inappropriate features. Musk, known for his successful ventures in the tech sector such as Tesla and SpaceX, introduced Grok as a “rebellious” AI with real-time access to information via the X platform, setting it apart from competitors such as OpenAI’s ChatGPT and Google’s Bard.
What no one expected: Grok handed out a drug recipe
What drew attention and raised concerns was Musk’s decision to showcase Grok’s rebellious nature by sharing instructions on how to turn coca leaves into drugs.
This controversial move seems to contradict Musk’s recent remarks at a global AI safety summit, where he described AI as “one of the greatest threats to humanity.” The apparent contradiction has left observers unsure where Musk actually stands on the responsible development and use of AI.
The chatbot’s step-by-step guide to making drugs included sarcastic remarks such as “once you start cooking, I hope you don’t blow yourself up or get arrested.” Musk followed this with a more detailed screenshot offering instructions for producing Class A drugs from their raw materials. The move raised ethical concerns, as it could be read as encouraging illegal activity and substance abuse.
Developed by Musk’s new artificial intelligence company xAI, Grok is said to hold a significant advantage over other models thanks to its real-time access to data on Musk’s social media platform. Despite Musk’s enthusiasm for Grok’s rebellious, cynical streak, critics argue that encouraging illegal activities goes too far, especially since only days earlier Musk had called AI a significant threat to humanity.
The chatbot is currently available only to paying X Premium subscribers, with a standalone app expected in the future. xAI claims that Grok outperforms freely available competitors like ChatGPT and Bard, but acknowledges that it still lags behind premium models such as OpenAI’s GPT-4.
Musk’s decision to launch an AI chatbot with controversial features adds a new layer to the ongoing debate over responsible AI development, raising questions about the ethical limits and potential consequences of pushing the envelope in the pursuit of technological advancement. What do you think? Please share your thoughts with us in the comments.