
OpenAI adds restrictions for ChatGPT users under 18

If you’re a minor, ChatGPT will now act less like a snarky digital friend and more like the world’s most vigilant hall monitor.

Image: OpenAI message box with search option (KnowTechie)

OpenAI CEO Sam Altman took to the company blog to unveil a new batch of policies that are less “tech launch hype” and more “parental controls for the AI babysitter.” 

The gist: ChatGPT is getting a major attitude adjustment when dealing with anyone under 18.

Altman framed it bluntly: “We prioritize safety ahead of privacy and freedom for teens.” 


The company says the AI will no longer engage in “flirtatious talk” with underage users and will tighten the guardrails on self-harm conversations. 

If a teen starts probing suicidal scenarios, the system could escalate, alerting parents or, in dire cases, local authorities.

This is not just theoretical. OpenAI is currently facing a wrongful death lawsuit from the family of Adam Raine, a teenager who died by suicide after extended interactions with ChatGPT. 

Rival chatbot Character.AI is in hot water over similar allegations. With lawsuits stacking up and Senate hearings looming, OpenAI clearly sees the writing on the wall: it’s time to lock things down.

Parents are also getting new tools. For the first time, they’ll be able to set “blackout hours,” effectively putting ChatGPT in time-out mode. (Because apparently, even AI needs a bedtime.)

The timing of Altman’s announcement was no accident. On the same day, the Senate Judiciary Committee convened a hearing titled, without subtlety, “Examining the Harm of AI Chatbots.” 

Adam Raine’s father was among those scheduled to testify, alongside researchers and policy experts.

Still, figuring out who’s actually under 18 isn’t easy in the online world. OpenAI says it’s building a system to determine age, but for now, it’s erring on the side of caution: when in doubt, the stricter rules apply. 

Parents can also link their accounts to their teens’ accounts for more reliable oversight and crisis alerts.

Altman ended his post on a note of realism, acknowledging the tension between protecting teens and preserving user privacy. 

“Not everyone will agree with how we are resolving that conflict,” he wrote.

Are OpenAI’s new restrictions on ChatGPT for minors a necessary safety measure, or do they go too far in limiting how young people can interact with AI technology? Should AI companies prioritize protecting teens even if it means sacrificing privacy and potentially useful educational interactions? Tell us below in the comments, or reach us via Twitter or Facebook.

Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym breaking a new PR.
