
ChatGPT faces legal complaint over fake murder accusation

ChatGPT falsely claimed that a man murdered his children and was sentenced to 21 years in prison.

Image: KnowTechie


A Norwegian man, Arve Hjalmar Holmen, discovered that ChatGPT falsely claimed he had been convicted of murdering his children and sentenced to 21 years in prison. 

Holmen, deeply distressed by the false claim, has filed a complaint with the Norwegian Data Protection Authority, seeking action against OpenAI for spreading defamatory and inaccurate information. 

Digital rights organization Noyb is representing Holmen, arguing that ChatGPT violated European data protection laws requiring accurate personal information. 

Noyb emphasized that Holmen is a law-abiding citizen who has never been accused of any crime, and that ChatGPT's false claim described, in detail, a horrific crime that never happened.

The chatbot stated that Holmen’s two sons were found dead in a pond in Trondheim, Norway, in 2020 and that he had been convicted of their murder and the attempted murder of his third son. 

According to the BBC, the chatbot even claimed the media had widely covered the case. No such coverage exists, because no such case ever occurred.

Holmen voiced his concern that, because some people believe "there is no smoke without fire," the claim could damage his reputation even though it is false.

Noyb lawyer Joakim Söderberg criticized OpenAI, stating that companies cannot simply spread false information and rely on a disclaimer saying that mistakes might occur.

AI-generated misinformation, known as “hallucinations,” is a known issue with chatbots. 

Other AI tools have produced bizarre and incorrect answers of their own; Google's AI Overviews, for example, suggested using glue to keep cheese on pizza and recommended that people eat rocks.

AI chatbots like ChatGPT are widely used but can sometimes produce false or misleading information.

This incident highlights the dangers of AI-generated misinformation and raises concerns about how generative AI tools operate with little regulatory oversight. 

While OpenAI has updated ChatGPT's underlying model since Holmen's incident, the chatbot is still not error-free.

Do you think ChatGPT deserves to be regulated for misinformation? What do you think about this particular case? We want to hear from you below in the comments, or via our Twitter or Facebook.


Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym breaking a new PR.
