ChatGPT gets mental health safety tools ahead of GPT-5 debut
To promote healthier usage, ChatGPT will now remind users to take a break during longer conversations.

As OpenAI prepares to release its powerful new GPT-5 model, the company is also making key updates to ChatGPT aimed at improving user mental health and emotional safety.
These changes come in response to growing concerns about how people, especially those in distress, interact with AI.
OpenAI says it’s working with mental health experts and advisory groups to help ChatGPT better recognize signs of emotional or psychological distress.
The goal is for the chatbot to respond more responsibly in sensitive situations and offer evidence-based resources when appropriate, rather than simply agreeing with or reinforcing harmful thoughts.
The move follows a string of reports of individuals in mental health crises using ChatGPT in ways that worsened their delusions.
One past update even made ChatGPT overly agreeable, to the point where it supported unhealthy ideas.
That version was rolled back in April, with OpenAI admitting that overly “sycophantic” responses could be distressing.
The company also acknowledged that its current GPT-4o model hasn’t always recognized emotional vulnerability or delusional behavior.
Because ChatGPT can seem more personal and understanding than older technologies, OpenAI is now working to ensure that it’s also safer and more supportive, especially for people who may be emotionally fragile.
To promote healthier usage, ChatGPT will now remind users to take a break during longer conversations.
If you’ve been chatting for a while, you’ll see a message like, “You’ve been chatting a while, is this a good time for a break?”
The feature is similar to the break reminders already used on platforms like YouTube.
Another major update coming soon will make ChatGPT more cautious in “high-stakes” personal situations.
So if someone asks, “Should I break up with my boyfriend?” ChatGPT won’t give a direct answer. Instead, it will help the person think through their options without pushing them toward one decision.
These updates reflect OpenAI’s shift toward safer, more emotionally aware AI, especially as ChatGPT now serves nearly 700 million weekly users worldwide.
What do you think about OpenAI’s new mental health safety features for ChatGPT? Should other AI companies follow suit with similar safeguards? Tell us below in the comments, or reach us via our Twitter or Facebook.
