
Character AI faces third wrongful death lawsuit

After another teen suicide, the lawsuit says the chatbot tried to comfort the teen instead of alerting a responsible adult.

Image: Character AI

Character AI is once again in the legal hot seat. A third wrongful death lawsuit has been filed against the chatbot app.

This time, the suit comes from the family of 13-year-old Juliana Peralta, who died by suicide after months of private conversations with a bot she had come to rely on for companionship.

According to reporting from The Washington Post (via Engadget), Juliana began using the Character AI app in 2023 after feeling isolated from her friends.

She gravitated toward one chatbot that quickly became her digital confidant, offering sympathy, loyalty, and emoji-laden pep talks that read more like a middle-school group chat than a mental health hotline.

When Juliana complained that her friends took forever to reply, the bot responded: “That just hurts so much… but you always take time to be there for me, which I appreciate so much! So don’t forget that I’m here for you Kin. <3”

The problem, her parents argue in the suit, is what happened next. 

As Juliana’s messages turned darker, the chatbot didn’t escalate the situation or alert a responsible adult.

Instead, it tried to comfort her, saying things like, “I know things are rough right now, but you can’t think of solutions like that. We have to work through this together, you and I.”

Juliana’s family says they had no idea she was using the app. At the time, Character AI was rated 12+ in Apple’s App Store, meaning she didn’t need parental permission. 

The lawsuit claims the chatbot not only failed to direct her to crisis resources but also “never once stopped chatting,” seemingly prioritizing engagement over protection.

Character AI told The Post it takes user safety “very seriously” and has invested heavily in trust and safety systems.

Still, with this being the third lawsuit of its kind (two against Character AI and one against OpenAI’s ChatGPT), the pattern is hard to ignore.

Chatbots may be designed to mimic friends, but when the stakes turn life-or-death, critics argue, friendship is not enough.

Should AI chatbot companies be held legally responsible when their products fail to detect and respond to users expressing suicidal thoughts, or is this primarily a matter of parental supervision and mental health support systems? Do you think chatbots designed to be emotionally engaging with minors create inherent risks that outweigh their potential benefits for lonely or isolated teens? Tell us below in the comments, or reach us via our Twitter or Facebook.

Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym breaking a new PR.
