Instagram’s “benefit without responsibility” corporate culture is a real problem
The content moderation crisis facing platforms like Facebook, Instagram, and YouTube is a monster of their own doing, but good luck getting any of them to admit it.
Last week, the National Society for the Prevention of Cruelty to Children (NSPCC) published a report declaring (or more accurately, condemning) Instagram as “the number one platform for child abusers.” Citing data pulled from 39 police forces under Freedom of Information laws, the British children’s charity claimed that the social media platform had been used 428 times by child groomers between April and September 2018 in the UK alone, a 239% increase from the previous year.
The report came less than a month after YouTuber Matt Watson released a profoundly disturbing video exposing a “soft-core pedophilia ring” on YouTube. Watson showed that the streaming service was not only allowing pedophiles to share social media contacts, links to actual child porn, and unlisted videos in the comments sections of many seemingly innocent children’s videos, but – thanks to a loophole (or “glitch,” as Watson put it) in its algorithm – was actually facilitating and even monetizing these incredibly sketchy videos/accounts through ads.
YouTube was quiet – until advertisers started pulling out
Of course, it wasn’t until major advertisers like AT&T and Epic Games started pulling their ads that YouTube decided to jump into action, announcing that it would no longer allow comments on videos featuring young minors and sending YouTube chief executive officer Susan Wojcicki on the Jack Dorsey Apology Tour.
Appearing at a tech summit in San Francisco over the weekend (featured in the video below at the 1:33:30 mark), Wojcicki attempted to answer for the service’s recent failings during a panel led by New York Times columnist Kara Swisher. When asked why YouTube would have to be “made aware” of a problem of this magnitude, Wojcicki offered the kind of boilerplate, corporate-approved response that we’ve come to expect out of Silicon Valley.
“We became aware of some comments, the videos were okay, but the comments with those videos … as soon as we were made aware of them (the number was in the low thousands) we removed comments off of tens of millions of videos,” said Wojcicki.
“We’re a platform that’s really balancing between freedom of speech and managing with our community guidelines.”
Shortly thereafter came the standard “we’ll continue to monitor the situation closely” from a YouTube spokesperson, and the whole thing just makes you feel sort of sick. Sick and numb.
Again, this isn’t the first time that YouTube has been caught with its blinders on in regards to a horrific abuse of its policies or guidelines. It’s not even the first time it has been caught unintentionally promoting pedophilia. In 2017, The Times of London reported that a number of major brands had been advertising on YouTube videos “showing scantily clad children that have attracted comments from hundreds of [pedophiles].” That was the same year, mind you, that dozens of businesses pulled their ads from YouTube after discovering that they were being linked to hate-speech videos.
Likewise, Business Insider recently discovered that Instagram’s new TV service IGTV was recommending sexually suggestive videos of children. Instagram removed the videos and apologized, and if I need to tell you what the apology looked like then you clearly aren’t paying attention.
The Instagram Problem
It’s not that the issues platforms like YouTube or Instagram are facing can be solved in the blink of an eye. Over 300 hours of video are uploaded to the former every day. The latter has over a billion monthly users. There aren’t enough content moderators on the planet to effectively monitor the constant deluge of content flooding each platform, as evidenced by the horrific workplace conditions, pay, and performance goals that content moderators are already forced to deal with.
The issue is that, time and time again, the platforms responsible for this spread of horrific and/or false information don’t ever seem to learn from it, and all but refuse to directly address it on a wider scale. In the case of YouTube, it’s the fact that the platform would delete the comments flagged as inappropriate in the videos, but not the accounts associated with those comments. With Instagram, it’s the notion that children aged 12 to 15 are most likely to be targeted by these “groomers” (with some victims aged as young as five) despite the platform’s own rules stating that users must be at least 13 years old.
It’s companies claiming that the volume of issues that they’re tasked with moderating is overwhelming, despite the fact that those very companies are responsible for creating that volume – and continuously increasing it – in order to protect their bottom line. It’s the darkest possible result of “move fast and break things,” which more and more companies like YouTube and Instagram love to promote in ideology but flat out refuse to defend in consequence.
Social media firms "must tackle grooming", says @NSPCC. "Police figures suggested Instagram, Facebook and Snapchat had been used in 70% of cases of sexual communication with a child since it became an offence in April 2017"
Read via @BBCNews: https://t.co/u4nKhS7Of2#safeguarding pic.twitter.com/JDSeffDkXm
— Ann Marie Christian (@Annmariechild) March 1, 2019
In the weeks since he released his video, Watson has been called out and outright attacked by his fellow YouTubers for everything from his own failed attempts at stardom to his “problematic” video history. The common thread tying his naysayers’ arguments together is Watson’s seemingly erroneous assumption that getting advertisers to pull their support from YouTube en masse – creating another “adpocalypse” like the one YouTube experienced in 2017 – is the only way to promote change.
There’s truth to their outrage, as there’s only so much that content creators can do to control what is said or promoted in the comment threads of their videos, which extends to the advertisers posting on those videos. Targeting advertisers only punishes those who are already being screwed by YouTube the hardest.
But regardless of Watson’s motives, the fact remains that YouTube didn’t take drastic measures until the scandal began to hurt it monetarily. Instagram hasn’t taken significant action beyond deleting a few comments because a seedy subculture of rampant sexual abuse isn’t losing or gaining it enough money to warrant an intervention. The idea that billion-dollar corporations choose money over morality 9 out of 10 times isn’t mind-blowing by any means, but it’s still impressive to see one proudly displaying its true intentions while preaching about the sanctity of “safety” and “community guidelines.”
As Swisher put it in her response to Wojcicki, “You have all reaped the benefit, but not the responsibility.” Whenever controversy strikes the tech world in particular, you’ll hear a lot of talk about the “larger conversation” that needs to be had before a firm decision can be made in either direction. It’s a classic non-answer and one of the best ways a corporation can say “we have no idea what to do” in a voice that assures you of just the opposite. But for those looking for an answer today instead of a promise tomorrow, it seems the only way that conversation even begins is through the wallet.
What do you think? What should companies like Instagram and YouTube do about their growing lists of issues? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.