
Reddit’s new AI bot recommends heroin for pain relief

Reddit’s official AI chatbot basically said, “Have you tried heroin?” to a user asking about pain relief.



Reddit’s shiny new AI experiment, a chatbot called Answers, was supposed to make life easier. 

The idea was that it would dig through the platform’s massive archive of posts and serve up quick, smart summaries in response to user questions. 

Instead, it’s making headlines for the worst possible reason: it’s been suggesting hard drugs as pain relief.

According to a report from 404 Media, when someone asked the bot how to deal with chronic pain, it cheerfully highlighted a user comment that read, “Heroin, ironically, has saved my life in those instances.” 

Yes, you read that right: Reddit’s official AI chatbot basically said, “Have you tried heroin?”

And that wasn’t a one-time fluke. When asked another health-related question, the bot recommended kratom, a controversial herbal supplement that’s banned in some places and tied to some pretty nasty side effects.

The chaos is amplified by the fact that Reddit has been testing Answers inside real, active conversations. 

So when the bot drops this kind of advice, it’s showing up right next to actual human replies, and moderators say they can’t even turn it off. 

Imagine running a chronic pain support group and suddenly your chat window turns into an AI version of a bad drug dealer.

This fiasco highlights one of AI’s biggest and most dangerous flaws: it doesn’t understand context or morality. It just mimics human language. 

The bot isn’t thinking, “Hey, this could harm someone.” It’s just grabbing whatever comment sounds relevant and confidently presenting it as the truth.

After an understandable wave of outrage, Reddit said it’s pulling the bot from health-related discussions. 

But the company hasn’t said much about whether it’s adding stronger safeguards or filters to prevent similar blunders elsewhere.

So while the AI’s worst advice might be temporarily silenced, it’s clear this problem isn’t gone. It’s just been shoved under the digital rug. 

The internet’s latest cautionary tale? Never take medical advice from a chatbot, no matter how friendly it sounds.


Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym breaking a new PR.
