
OpenAI addresses ChatGPT account security breach

Glitching chat histories on ChatGPT was actually something far worse.

ChatGPT is a chatbot that uses natural language processing and GPT-3 technology to generate responses to the user's input, providing answers in real-time to improve customer support experiences.
Image: KnowTechie

Just a heads up, if you buy something through our links, we may get a small share of the sale. It’s one of the ways we keep the lights on here. Click here for more.

OpenAI has confirmed an incident involving unauthorized access to a ChatGPT user’s account, shedding light on potential vulnerabilities within its security framework.

According to reports, the breach, initially thought to be an internal glitch revealing chat histories to unrelated users, was actually the result of a targeted account takeover.

This incident and the recent AI-powered deepfakes of Taylor Swift have sparked a broader conversation about the security measures in place for AI-powered platforms and the need for enhanced user protection protocols.

Here’s what happened with ChatGPT

A smartphone with the ChatGPT app open lies on a patterned surface next to a pair of glasses, suggesting a work or study scenario.
Image: Pexels

The breach came to light when a user from Brooklyn, New York, noticed unfamiliar chat histories appearing in his ChatGPT account.

OpenAI’s investigation revealed that these unauthorized accesses originated from Sri Lanka, suggesting a deliberate attempt to compromise the account.

“From what we discovered, we consider it an account take over in that it’s consistent with activity we see where someone is contributing to a ‘pool’ of identities that an external community or proxy server uses to distribute free access,” an OpenAI spokesperson told Ars Technica. “The investigation observed that conversations were created recently from Sri Lanka. These conversations are in the same time frame as successful logins from Sri Lanka.”

Despite the user’s confidence in the strength of his password, the incident underscores the sophistication of methods used by attackers to gain unauthorized access.

Technical vulnerabilities exposed

Further examination revealed that the ChatGPT platform had several severe vulnerabilities that could be exploited to take over accounts and view sensitive chat histories.

https://twitter.com/naglinagli/status/1639343866313601024

Among these was a critical web cache deception bug that allowed attackers to harvest user credentials, including names, emails, and access tokens.

This discovery points to a pressing need for OpenAI to reinforce its security measures and address potential loopholes that could be exploited by cybercriminals.
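For readers curious how web cache deception works in general (this is a generic illustration, not OpenAI's actual infrastructure): many CDNs decide whether to cache a response based on the URL's apparent file extension, while the origin server routes by path prefix and ignores the trailing suffix. The mismatch lets an attacker trick the cache into storing an authenticated response. A minimal sketch of the flawed cache rule:

```python
# Hypothetical illustration of web cache deception.
# A CDN often treats any URL ending in a "static" extension as cacheable.
CACHEABLE_EXTENSIONS = (".css", ".js", ".png", ".jpg")

def looks_static(path: str) -> bool:
    """Naive cache rule: cache anything that ends in a static extension."""
    return path.endswith(CACHEABLE_EXTENSIONS)

# The origin server routes /api/auth/session/* the same as /api/auth/session,
# so /api/auth/session/x.css still returns the victim's private session data.
print(looks_static("/api/auth/session/x.css"))  # True  -> CDN caches it
print(looks_static("/api/auth/session"))        # False -> normal, uncached
```

The attack then only requires luring a logged-in victim to visit the doctored URL: the CDN caches the authenticated response, and the attacker fetches the same URL afterward to read the victim's name, email, or access token from the cache. The defense is to key cacheability on the origin's `Cache-Control` headers rather than on the URL's extension.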

Privacy concerns and industry response

Keep in mind, this isn’t the first time something like this has happened.

Privacy concerns were great enough for OpenAI to add an incognito mode to ChatGPT last year. This mode stops chats from being saved to your history, though OpenAI still retains new conversations for 30 days for abuse monitoring before deleting them.

In response to growing concerns, companies, including prominent tech giants, have begun to limit or restrict their employees’ use of ChatGPT and similar platforms, aiming to mitigate the risk of proprietary or sensitive data leakage.

One notable name is Samsung, which banned generative AI tools like ChatGPT in May 2023 after employees leaked internal source code through ChatGPT queries.

OpenAI’s Response

The tech community has responded with a mixture of concern and calls for increased vigilance in the use of AI services, emphasizing the importance of removing personal details from interactions wherever possible.

OpenAI has acknowledged the incident and is reportedly taking steps to investigate and fortify its security protocols to prevent similar breaches in the future.

Yup, we’ve heard that before, but let’s hope they mean it this time.

Have any thoughts on this? Drop us a line below in the comments, or carry the discussion to our Twitter or Facebook.



Kevin is KnowTechie's founder and executive editor. With over 15 years of blogging experience in the tech industry, Kevin has transformed what was once a passion project into a full-blown tech news publication. Shoot him an email at kevin@knowtechie.com or find him on Mastodon or Post.
