Reddit’s new AI is in the hot seat. Moderators say Reddit Answers is surfacing dangerous medical advice, and there’s no way to turn it off.
One mod flagged responses that told people with chronic pain to ditch their prescriptions and take high-dose kratom, an unregulated substance that's illegal in some states. Further prompting produced a mix of correct steps and reckless ideas, including a suggestion that heroin could be used for chronic pain relief. Mods of health communities piled on with the same concern: when the AI is wrong, they can't hide it, flag it, or stop it from showing up.
Reddit says it’s making tweaks. A spokesperson told 404 Media that “Related Answers” on sensitive topics will no longer appear on the post detail page, a move meant to improve the experience and keep content visibility appropriate. The company also says Reddit Answers excludes content from private, quarantined, and NSFW communities, plus some mature topics. But none of that solves the core problem. The tool still isn’t built to deliver medical guidance, and it definitely isn’t built to parse snark, sarcasm, or the bad advice that floats around Reddit.
That leaves moderators stuck. Without controls to disable or limit AI in their subs, the job gets harder. The stakes are higher. And the risk that harmful answers slip through remains very real.