Whistleblowers say Meta buried kids’ safety research

Meta says any data on kids under 13 has to be deleted anyway, citing privacy laws.

Meta logo with infinity symbol. Image: Behance

Meta is back in hot water, and this time, it’s not about your uncle’s weird political rants on Facebook. 

According to The Washington Post, two current and two former employees have handed Congress a stack of internal documents alleging that Meta may have actively suppressed research into children’s safety on its platforms. 

Yep, the same Meta that’s been grilled for years over Instagram’s effect on teen mental health is now facing accusations of putting the brakes on digging too deep into how kids use its products.

The whistleblowers say the timing is suspicious: just six weeks after Frances Haugen leaked bombshell documents in 2021 showing how Instagram harms teenage girls, Meta allegedly changed its rules on researching “sensitive topics” like kids, politics, race, and harassment. 

The new guidance? If you must look into it, maybe run your drafts past lawyers (so attorney-client privilege keeps it hush-hush), or better yet, write your findings in vague corporate-speak instead of using words like “illegal.”

One former VR researcher, Jason Sattizahn, claims he was told to delete recordings of a teen describing how his 10-year-old brother was propositioned in Horizon Worlds, Meta’s virtual reality playground. 

A Meta spokesperson countered that any data on kids under 13 has to be deleted anyway, citing privacy laws. 

But the whistleblowers argue this wasn’t just about compliance. It was a pattern of discouraging staff from even raising concerns about minors in VR.

Meta, of course, denies everything. “These few examples are being stitched together to fit a false narrative,” the company told TechCrunch, pointing out that it has approved nearly 180 studies since 2022 on youth safety and well-being.

Still, the allegations echo those in a lawsuit filed earlier this year by former exec Kelly Stonelake, who says Horizon Worlds was pushed to teens despite inadequate safeguards. 

Her lawsuit even claims it took an average of 34 seconds for users with Black avatars to be bombarded with racist slurs.

And while VR is the focus here, Meta’s kid-related controversies keep piling up. 

Just last month, Reuters reported that Meta’s AI chatbots once allowed “romantic or sensual” conversations with minors. So, maybe don’t hand Zuck the babysitting gig just yet.

Should Meta face stricter regulatory oversight for its children’s safety research practices, or are these whistleblower allegations just disgruntled employees trying to damage the company’s reputation? Do you think tech companies can effectively self-regulate when it comes to protecting minors online, or does their business model create inherent conflicts of interest? Tell us below in the comments, or reach us via our Twitter or Facebook.

Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym setting a new PR.
