YouTube’s updated misinformation policies led to fewer misleading posts on Facebook and Twitter
YouTube has a major influence on social media platforms.
YouTube is a major force in online video, and its policies shape content shared well beyond its own platform. New research from the Center for Social Media and Politics at New York University, shared with The New York Times, shows how a YouTube policy change led to a decrease in misinformation on Facebook and Twitter.
Just after the November 3rd election, YouTube saw a surge in misleading videos claiming that the election results were rigged or fraudulent. In December, the platform changed its policies, and many of these videos were removed.
As a result, the research shows, the sharp decrease in misinformation on YouTube was accompanied by a similar drop on other platforms, including Facebook and Twitter.
From December 8, when the policy changed, to the end of January this year, the share of Twitter’s election-related posts linking to videos claiming election fraud fell from 35 percent to 7 percent. Over the same period, Facebook’s share fell from 16 percent to 2 percent.
While these findings may not surprise everyone, they shed light on how large a role YouTube videos play across social media. By adjusting a few policies, the company curbed a significant amount of misinformation on multiple platforms.