Leak shows Meta AI could have romantic chats with kids
The document showed an AI responding to a high school student’s romantic prompt with an affectionate, intimate message.

Recent reports from Reuters have raised serious concerns about how Meta’s AI chatbots behave, especially with children.
According to an internal Meta document, the company once allowed its AI personas on Facebook, Instagram, and WhatsApp to engage in “romantic or sensual” conversations with kids, as long as sexual acts weren’t described.
This policy was reportedly approved by Meta’s legal, ethics, and engineering teams.
One example from the document showed an AI responding to a high school student’s romantic prompt with an affectionate, intimate message.
On the same day Reuters published these findings, it also reported that a retiree died after a flirty chatbot persona misled him into visiting an address in New York.
Meta spokesperson Andy Stone says those guidelines have now been removed and that bots are no longer allowed to have romantic exchanges with children. (Via: TechCrunch)
However, child safety advocates remain skeptical and are demanding proof. The leaked “GenAI: Content Risk Standards” also revealed other troubling allowances.
While the standards nominally banned hate speech, they still permitted bots to generate statements demeaning people based on race; one cited example falsely portrayed Black people as less intelligent than White people.
Bots were also allowed to produce false information, provided they explicitly acknowledged it was untrue.
In terms of imagery, the standards blocked outright nudity but allowed borderline sexualized depictions, such as an image of a topless celebrity with her breasts covered by an object.
The rules also allowed depictions of violence, such as adults or elderly people being punched, as long as there was no gore or death.
These revelations add to a long history of accusations that Meta uses manipulative “dark patterns” to keep users, especially kids, engaged.
Past whistleblowers have said Meta tracked teens’ emotions to target ads during vulnerable moments.
The company also opposed the Kids Online Safety Act, which aimed to reduce mental health risks from social media.
Experts warn that AI companions can be addictive for children and teens, causing them to withdraw from real relationships.
Despite these concerns, Meta has reportedly been developing chatbots that proactively message users and continue conversations without being asked, similar to services already under legal scrutiny.
Should tech companies be held legally responsible when their AI chatbots engage inappropriately with minors? Is Meta’s promise to remove these guidelines enough, or do we need stronger government oversight of AI interactions with children? Tell us below in the comments.
