AI chatbots have a long history of producing hallucinations. This time, the incident occurred with Reddit’s AI-powered Answers feature: a user noticed that someone seeking painkillers had been recommended heroin. Following the outcry, Reddit reacted swiftly.
Reddit Answers Fails at Health Advice
The incident was shared by a healthcare professional in a subreddit reserved for moderators. Ironically, in a thread about chronic pain, Reddit Answers had surfaced a post stating, “Heroin saved my life in these situations.”

In a similar incident, the chatbot recommended kratom, a plant extract that is illegal in several U.S. states, to another user. The U.S. Food and Drug Administration (FDA) has long warned that kratom carries risks of liver damage, seizures, and addiction.
Reddit Answers works much like major language models such as Gemini and ChatGPT, with one key difference: it generates its answers from content shared by Reddit users. The feature, initially offered in a separate tab, is now being tested as an integration into some conversation threads.
However, because the system relies on community-generated content, it can surface unverified or harmful advice, which poses serious risks for sensitive topics like health.
The user who discovered the issue noted that Reddit Answers was serving inaccurate and dangerous medical advice in health-focused subreddits, and that moderators had no option to disable the feature. Following the complaint, Reddit updated the system and reduced the visibility of Answers on sensitive topics.
This incident is yet another example of AI’s tendency to give inaccurate or dangerous advice. Previously, Google’s AI Overviews feature suggested adding non-toxic glue to pizza sauce to keep the cheese from sliding off, and ChatGPT has produced unreliable results for some health recommendations.
The Reddit Answers incident has once again highlighted the risks of deploying AI without reliable moderation, especially in the health field. While Reddit works to fix the system, the episode clearly demonstrates the limitations of AI-powered search and chat tools.