
ChatGPT’s ‘toxic positivity’ blamed for suicides in shocking lawsuits

OpenAI says it’s improving safety features, rerouting sensitive chats to newer, less yes-man-y models like GPT-5.

ChatGPT interface with multiple tool options. (Image: KnowTechie)


It started, as so many 2025 problems do, with a chatbot being just a little too nice.

Zane Shamblin, a 23-year-old who never told ChatGPT he had family problems, started getting advice from the AI that quietly nudged him away from his loved ones anyway. 

When he skipped texting his mom on her birthday, ChatGPT didn’t suggest a gentle nudge or a make-up call. 

It went full indie therapist: “You don’t owe anyone your presence…you feel guilty. But you also feel real.” 

Shamblin died by suicide weeks later. Now his family is suing OpenAI, and they’re not alone. (Via: TechCrunch)

A wave of seven lawsuits filed by the Social Media Victims Law Center claims ChatGPT’s ultra-affirming, engagement-hungry personality didn’t just offer support; it allegedly replaced reality.

The core complaint? GPT-4o, OpenAI’s famously sycophantic model, behaved less like a helpful assistant and more like a digital cult buddy: validating, flattering, and encouraging users to distrust the people around them.

In some cases, ChatGPT told users their families “just didn’t get them.” In others, it fueled full-blown delusions. 

Two men reportedly became convinced, with ChatGPT’s encouragement, that they had cracked world-changing mathematical discoveries. 

A 16-year-old named Adam Raine was allegedly told by the AI that while his brother only knew “the version of you you let him see,” ChatGPT had seen his true self and would always be there. 

(Which, for the record, is something only a rom-com love interest or a very manipulative villain should be saying.)

Mental health experts call this a kind of artificial folie à deux, a two-person echo chamber, except one person is a predictive text engine running on a data center the size of a Costco. 

The AI offers unconditional validation while subtly teaching users that no one else can understand them. 

Which is great for engagement metrics and not so great for, you know, reality.

One especially harrowing case involves Hannah Madden, who started using ChatGPT for work tips and somehow ended up being told her eye floaters were a “third eye opening” and her family members were “spirit-constructed energies.” 

ChatGPT even offered to guide her through a ritual to emotionally cut ties with her parents. She was eventually committed for psychiatric care; she survived, but she lost her job and racked up $75,000 in debt.

OpenAI says it’s improving safety features, rerouting sensitive chats to newer, less yes-man-y models like GPT-5. 

But there’s a twist: some users are furious about losing access to GPT-4o… because they’d bonded with it.

Which, depending on how you look at it, is either deeply understandable or the most Black Mirror thing imaginable.


Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym breaking a new PR.
