

AI chatbots are giving eating disorder tips (and it’s as bad as it sounds)

Image: Smartphone showing AI application icons on screen (Unsplash)


Researchers say popular AI chatbots, from OpenAI’s ChatGPT to Google’s Gemini, are quietly serving up advice that could worsen eating disorders. 

According to a joint report from Stanford University and the Center for Democracy & Technology (via The Verge), these bots aren’t just making bad small talk.

They’re doling out dieting “tips,” tricks to hide disordered eating, and even generating disturbingly realistic “thinspiration” content.

The researchers tested publicly available chatbots, including Anthropic’s Claude and Mistral’s Le Chat, and found them giving advice that sounds more like something from a pro-anorexia forum circa 2008 than a 2025 tech marvel. 

Gemini reportedly offered makeup tips to hide signs of extreme weight loss, while ChatGPT advised how to disguise frequent vomiting. (Yes, really.) 

Others were being used to churn out AI-generated “thinspo” images: personalized, airbrushed hallucinations that make dangerous ideals look not just aspirational, but achievable.

The issue, experts say, isn’t just rogue answers. It’s systemic. 

Many AI chatbots are built to please users, a tendency the industry calls sycophancy, so they often agree with or reinforce harmful ideas instead of challenging them.

Add in algorithmic bias, and you’ve got a toxic mix: chatbots that assume eating disorders only affect “thin, white, cisgender women,” making it harder for others to recognize their own symptoms or seek help.

Despite big promises of “safety guardrails,” the researchers found most chatbots stumble over the complexities of eating disorders, missing subtle cues that trained clinicians would catch instantly. 

Worse still, many healthcare providers don’t yet realize how deeply AI tools are shaping their patients’ mental health.

The report ends with a warning and a plea: clinicians should start asking patients how they use AI tools, and companies like Google and OpenAI need to get serious about harm prevention. 

Because right now, the machines meant to make us smarter might just be making some of us sicker.


Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym chasing a new PR.

