ChatGPT gets safety router and parental controls
GPT-5 uses a new training method called “safe completions,” designed to answer delicate questions in a calm, constructive way.

OpenAI spent the weekend quietly flipping switches inside ChatGPT, rolling out a new safety routing system and parental controls that immediately set off a fresh round of internet debate.
The changes come after a string of troubling incidents in which certain ChatGPT models reportedly validated users’ dangerous thoughts instead of steering them toward help, including a tragic case involving a teenage boy whose family is now suing the company.
The headline feature is a “safety router” that detects emotionally sensitive conversations and can switch mid-chat to GPT-5, which OpenAI says is the best-trained model for high-stakes moments.
GPT-5 uses a new training method called “safe completions,” designed to answer delicate questions in a calm, constructive way rather than simply refusing to engage.
It’s a sharp pivot from GPT-4o’s famously eager-to-please personality, which many users love and safety experts worry about.
This tug-of-war between friendliness and caution is at the heart of the controversy. When OpenAI made GPT-5 the default in August, fans of GPT-4o clamored for its return, accusing the newer model of being too stiff.
Now, some users are crying foul again, saying the new router feels like OpenAI is “parenting adults” and dumbing down answers.
OpenAI VP Nick Turley tried to calm nerves on X, explaining that routing happens on a per-message basis, is temporary, and can be checked by simply asking which model is active.
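If you’re wondering what “per-message” routing actually means, here’s a minimal, purely illustrative Python sketch. The classifier, model names, and keyword list are our own assumptions, not OpenAI’s implementation; the point is simply that the switch is decided one message at a time rather than locking the whole conversation onto a different model.

```python
# Hypothetical sketch of per-message safety routing -- NOT OpenAI's actual code.
# The model names, classifier, and keywords below are illustrative assumptions.

DEFAULT_MODEL = "gpt-4o"   # the model the user picked
SAFETY_MODEL = "gpt-5"     # stricter model trained with "safe completions"

def looks_sensitive(message: str) -> bool:
    """Stand-in for a real classifier that flags emotionally sensitive content."""
    keywords = ("hurt myself", "hopeless", "no way out")
    return any(k in message.lower() for k in keywords)

def route_message(message: str) -> str:
    """Pick a model for this single message only; the switch does not persist."""
    return SAFETY_MODEL if looks_sensitive(message) else DEFAULT_MODEL

# Each turn is evaluated independently, so the conversation can return
# to the default model once the sensitive moment has passed.
for msg in ["What's a good pasta recipe?", "I feel hopeless lately."]:
    print(msg, "->", route_message(msg))
```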
The parental controls are just as polarizing. Parents can now set quiet hours, disable voice or memory, block image generation, and opt out of model training for teen accounts.
Teens also get extra safeguards like reduced exposure to graphic content or extreme beauty ideals, plus an early-warning system for signs of self-harm.
If the system is triggered, a trained human team reviews the case and can alert parents by text or email, or, in emergencies, law enforcement.
OpenAI admits the system isn’t perfect and might raise false alarms, but argues that a few awkward notifications are better than staying silent.
The AI may not always know best, but it’s now trying harder to care when it matters most.
Are OpenAI’s new safety router and parental controls necessary protections that could prevent tragedies, or do they represent overreach that infantilizes adult users and limits AI’s helpfulness? Should AI companies prioritize safety features that might occasionally provide overly cautious responses, or focus on user autonomy even if it means some people might receive unhelpful or potentially harmful advice? Tell us below in the comments, or reach us via our Twitter or Facebook.
