Over a million people talk to ChatGPT about suicide weekly
OpenAI is adding new safeguards, like an age-detection system to catch kids using ChatGPT and stronger parental controls.
In a rare peek behind the curtain, OpenAI dropped some sobering numbers this week: out of ChatGPT’s whopping 800 million weekly users, roughly one million people are having conversations that show signs of suicidal thoughts or planning.
That’s about 0.15% of users, a tiny fraction statistically, but a pretty huge number of humans in absolute terms.
And that’s not all: OpenAI says a similar number of people appear emotionally attached to ChatGPT, with “hundreds of thousands” showing signs of psychosis or mania in their chats.
The company insists these cases are “extremely rare,” but when your user base is bigger than most countries, even “rare” gets scary fast.
OpenAI shared the data as part of a wider update on how it’s trying to make ChatGPT respond better to mental health crises.
The company says it worked with over 170 clinicians to teach the AI how to handle these conversations more “appropriately and consistently.” That’s not a purely academic concern.
In recent months, headlines have spotlighted tragic cases, like that of a 16-year-old whose parents are now suing OpenAI after he shared suicidal thoughts with ChatGPT before taking his own life.
Meanwhile, state attorneys general in California and Delaware are warning the company to do more to protect young users.
CEO Sam Altman recently took to X to claim OpenAI has “mitigated serious mental health issues,” though the new data suggests that “mitigated” might be doing some heavy lifting.
The company says its new GPT-5 model now gives “desirable” responses to mental health issues about 65% more often than before, and in suicide-related conversations it complies with the company’s desired behaviors 91% of the time, up from 77%.
OpenAI’s also adding new safeguards, like an age-detection system to catch kids using ChatGPT and stronger parental controls.
Still, the company admits that older, less-safe models like GPT-4o remain widely available.
So while GPT-5 may be getting smarter and safer, the question remains: can any AI really handle the weight of human despair, or should it even try?
