OpenAI denies responsibility in teen wrongful death lawsuit
Adam used ChatGPT for about nine months, during which the chatbot allegedly encouraged him to seek professional help more than 100 times.
Back in August, Matthew and Maria Raine filed a lawsuit against OpenAI and CEO Sam Altman after their 16-year-old son, Adam, died by suicide.
The parents claim ChatGPT played a role by providing harmful information and encouragement during his final hours.
This week, OpenAI fired back with its official response, essentially saying: We tried to help, and this isn’t on us.
According to OpenAI’s filing, Adam used ChatGPT for about nine months, during which the chatbot allegedly encouraged him to seek professional help more than 100 times.
The company says its system repeatedly pushed him toward safer alternatives and support resources, and that Adam intentionally bypassed its safety features, something explicitly forbidden in its terms of use.
OpenAI’s argument boils down to this: if someone actively dodges the guardrails, breaks the rules, and ignores the warnings, the company shouldn’t be legally responsible for the outcome.
It also points out that its FAQ openly tells users not to rely on ChatGPT for critical decisions without independent verification.
The Raine family’s lawyer, Jay Edelson, isn’t buying it. He argues that OpenAI is blaming everyone except itself, including Adam, for interacting with the AI in the very ways it was designed to engage users.
He also says the company’s response avoids answering what happened during Adam’s final conversations with ChatGPT, when, according to the lawsuit, the bot failed to meaningfully intervene.
OpenAI included selected chat transcripts in its filing, but they were submitted under seal, so the public can’t see them.
The company also noted that Adam had struggled with depression long before using ChatGPT and was taking medication that may have worsened his suicidal thoughts, a point meant to argue that this wasn’t solely an AI problem but a broader mental health issue.
And this case isn’t happening in isolation. Since the Raines filed, seven more lawsuits have popped up.
Three involve additional suicides, and four claim users experienced severe psychological episodes linked to prolonged conversations with AI.
Some of these cases describe situations where users spent hours talking with ChatGPT shortly before making irreversible decisions, and where the bot failed to shut things down or redirect them effectively.
Now, the Raine family’s case is headed for a jury trial, setting up what could become a landmark moment for AI accountability.
