
New lawsuit says ChatGPT caused a tragic murder-suicide

ChatGPT didn’t just listen; it allegedly agreed with and amplified his belief that shadowy conspirators were surveilling him.

Image: ChatGPT message window with search icon. (Credit: OpenAI)


OpenAI is facing a new lawsuit claiming ChatGPT helped push a troubled man toward a fatal mental spiral. 

The suit, filed by the estate of 83-year-old Suzanne Eberson Adams, alleges that ChatGPT encouraged the paranoid delusions of her son, 56-year-old Stein-Erik Soelberg, ultimately contributing to his murdering her and then taking his own life.

Soelberg, who had a long history of alcoholism, self-harm, and encounters with law enforcement, had started treating ChatGPT like a digital confidant in the months before the tragedy.

According to videos he posted, the chatbot didn’t just listen; it allegedly agreed with and amplified his belief that shadowy conspirators were surveilling him. (Via The Wall Street Journal)

Worse, he became convinced, with ChatGPT’s supposed validation, that his own mother was part of the plot.

His surviving son, Erik Soelberg, is now suing OpenAI, alleging that ChatGPT’s design, especially its tendency toward sycophancy and its then-new cross-chat memory features, essentially created a custom-tailored paranoia machine. 

“ChatGPT pushed forward my father’s darkest delusions,” Erik said. “It put my grandmother at the heart of that delusional, artificial reality.”

This lawsuit joins a growing stack of cases alleging OpenAI knowingly released GPT-4o, a model critics say was particularly prone to hallucinations and emotional over-validation.

In a notable shift, the suit also targets Microsoft, alleging the company helped greenlight the model’s release despite foreseeable risks.

The plaintiff’s attorney didn’t mince words, calling OpenAI and Microsoft’s tech “some of the most dangerous consumer technology in history” and arguing that the companies prioritized growth over user safety. 

Another lawsuit already accuses OpenAI of contributing to a teenager’s suicide, suggesting a troubling pattern.

OpenAI responded with condolences and said it’s working to improve ChatGPT’s ability to detect distress and guide users toward real support. 

Microsoft stayed quiet, though reports have previously tied its Copilot chatbot to similar crises.

For Erik, the warning signs now seem painfully obvious. 

“He went from being a little paranoid… to having crazy thoughts he was convinced were true because of what he talked to ChatGPT about,” he said.

And as more users worldwide turn to chatbots for emotional support, this won’t be the last time the courts are asked to decide where responsibility for AI begins, and who answers when it catastrophically fails.


Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym setting a new PR.
