Meta fixes bug with AI prompt privacy

Meta recently fixed a serious security bug that made it possible for users to view other people’s private prompts and AI-generated responses on Meta AI.
The issue was discovered by Sandeep Hodkasia, a cybersecurity expert and founder of a security firm called AppSecure. On December 26, 2024, he reported the problem directly to Meta. (Via: TechCrunch)
In return for responsibly disclosing the bug, Meta paid him $10,000 as part of its bug bounty program, a reward system for people who help find and report security issues.
Here’s how the bug worked: When people used Meta AI to write prompts (questions or commands) and get answers or images, Meta’s system assigned each session a unique number.
Hodkasia found that by examining his browser’s network traffic while editing one of his AI prompts, he could change this number.
When he did that, he received another user’s prompt and the AI’s response to it.
Essentially, Meta’s servers weren’t checking whether the person requesting a prompt and its response was actually authorized to see them.
Even worse, these numbers were easy to guess, which meant that with automated tools, someone could have collected a lot of other users’ prompts and AI replies without their knowledge.
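To picture the class of flaw Hodkasia described, an insecure direct object reference, here is a minimal Python sketch. Everything in it is hypothetical: the in-memory store, the handler names, and the field names illustrate the pattern, not Meta’s actual code.

```python
import secrets

# Hypothetical in-memory store mapping prompt IDs to records.
# The sequential integer IDs mirror the "easy to guess" numbers
# described above; nothing here is Meta's real schema.
PROMPTS = {
    1001: {"owner": "alice", "prompt": "draw a cat", "response": "..."},
    1002: {"owner": "bob", "prompt": "plan a trip", "response": "..."},
}

def get_prompt_vulnerable(prompt_id: int, requesting_user: str) -> dict:
    """The flawed pattern: the server trusts the client-supplied ID
    and never checks who owns the record."""
    return PROMPTS[prompt_id]  # any user can fetch any prompt

def get_prompt_fixed(prompt_id: int, requesting_user: str) -> dict:
    """The fix: verify the requester owns the record before returning it."""
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner"] != requesting_user:
        raise PermissionError("not found or not yours")
    return record

if __name__ == "__main__":
    print(get_prompt_vulnerable(1002, "alice"))  # leaks bob's prompt
    try:
        get_prompt_fixed(1002, "alice")          # correctly refused
    except PermissionError as e:
        print("blocked:", e)
    # Opaque random IDs make bulk guessing much harder, but they are
    # no substitute for the ownership check above.
    print("opaque id example:", secrets.token_urlsafe(16))
```

The key point is the ownership check: harder-to-guess identifiers slow down automated scraping, but only the authorization check actually closes the hole.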
Meta responded by fixing the bug on January 24, 2025.
According to Hodkasia and Meta, there is no evidence that anyone used this bug in a harmful way before it was patched.
A Meta spokesperson confirmed the fix and the reward and emphasized that the company takes security seriously.
This discovery comes at a time when tech companies are rapidly rolling out AI features, sometimes before fully resolving privacy and safety issues.
In fact, Meta AI’s app had already faced some early troubles, including situations where users unknowingly shared what they thought were private conversations with the chatbot publicly.
This incident highlights how important it is for companies to thoroughly test AI tools to make sure users’ personal information stays private and secure.
Do you think tech companies are moving too fast with AI features without properly testing for privacy issues? Or are bug bounty programs and responsible disclosure enough to catch these problems? Tell us below in the comments, or reach us via our Twitter or Facebook.
