Share of election-related posts on social platforms containing links to videos making allegations of fraud
YouTube's stricter policies against election disinformation were followed by a sharp decline in the prevalence of bogus and misleading videos on Facebook and Twitter, according to a new study released Thursday, highlighting the video service's power across social media.
Researchers at the Center for Social Media and Politics at New York University found a significant rise in YouTube videos about voter fraud that were shared on Twitter immediately after the Nov. 3 election. In November, those videos consistently accounted for about a third of all election-related video shares on Twitter. The top voter-fraud YouTube channels shared on Twitter that month came from sources that had promoted election misinformation in the past, such as Project Veritas, Right Side Broadcasting Network and One America News Network.
But the proportion of election fraud claims shared on Twitter fell sharply after Dec. 8. That was the day YouTube announced it would remove videos promoting the unfounded theory that widespread errors and fraud changed the outcome of the presidential election. By Dec. 21, the proportion of fraud-related YouTube content shared on Twitter had fallen below 20 percent for the first time since the election.
The proportion fell further after Jan. 7, when YouTube announced that any channel violating its election disinformation policy would receive a "strike," and that channels receiving three strikes within a 90-day period would be permanently removed. By Inauguration Day, the proportion was around 5 percent.
The trend was replicated on Facebook. A post-election surge in the sharing of videos containing fraud theories peaked at about 18 percent of all video shares on Facebook just before Dec. 8. After YouTube introduced its stricter policies, the proportion fell sharply for much of the month, before rising slightly ahead of the Jan. 6 riot at the Capitol. The proportion fell again, to 4 percent by Inauguration Day, after the new policies took effect on Jan. 7.
To reach their conclusions, the researchers collected a random sample of 10 percent of all tweets each day. They then isolated the tweets that linked to YouTube videos. They did the same for YouTube links on Facebook, using CrowdTangle, a social media analytics tool owned by Facebook.
From that large data set, the researchers filtered for YouTube videos about the election in general, and about voter fraud in particular, using a set of keywords such as "Stop the Steal" and "Sharpiegate." That gave the researchers a sense of the volume of YouTube videos about voter fraud over time, and how that volume changed in late 2020 and early 2021.
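The filtering step described above can be sketched roughly as follows. This is a hypothetical illustration, not the researchers' actual code: the post structure, helper names, and keyword list (beyond the two examples cited in the article) are assumptions.

```python
import re

# Example keywords cited in the article; the study's full list is not public here.
FRAUD_KEYWORDS = ["stop the steal", "sharpiegate"]
YOUTUBE_LINK = re.compile(r"(youtube\.com/watch|youtu\.be/)")

def is_youtube_post(post: dict) -> bool:
    """True if the post contains a link to a YouTube video."""
    return any(YOUTUBE_LINK.search(url) for url in post.get("urls", []))

def mentions_voter_fraud(post: dict) -> bool:
    """True if the post text matches any voter-fraud keyword."""
    text = post.get("text", "").lower()
    return any(kw in text for kw in FRAUD_KEYWORDS)

def daily_fraud_share(posts: list) -> float:
    """Share of YouTube-linking posts that match the fraud keywords."""
    yt_posts = [p for p in posts if is_youtube_post(p)]
    if not yt_posts:
        return 0.0
    fraud = [p for p in yt_posts if mentions_voter_fraud(p)]
    return len(fraud) / len(yt_posts)

# Tiny made-up sample: two YouTube posts, one matching a fraud keyword.
sample = [
    {"text": "Stop the Steal rally video", "urls": ["https://youtube.com/watch?v=abc"]},
    {"text": "Election night coverage", "urls": ["https://youtube.com/watch?v=def"]},
    {"text": "Unrelated post", "urls": ["https://example.com"]},
]
print(daily_fraud_share(sample))  # 0.5
```

Computing this share for each day's sample would yield the time series the study tracks, such as the drop below 20 percent after Dec. 8.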
Misinformation on major social networks has proliferated in recent years. YouTube in particular has lagged behind other platforms in cracking down on different types of disinformation, often announcing stricter policies several weeks or months after Facebook and Twitter. In recent weeks, however, YouTube has toughened its policies, such as banning anti-vaccine misinformation and suspending the accounts of prominent anti-vaccine activists, including Joseph Mercola and Robert F. Kennedy Jr.
Megan Brown, a researcher at the N.Y.U. Center for Social Media and Politics, said it was possible that after YouTube banned the content, people could no longer share the videos promoting voter fraud. It is also possible that interest in election fraud theories waned considerably after states certified their election results.
But the bottom line, Ms. Brown said, is that "we know these platforms are deeply interconnected." YouTube, she stressed, has been identified as one of the most shared domains on other platforms, including in platform content reports and related research.
" It 's an important part of the information ecosystem "Ms. Brown said, " so when the YouTube platform becomes healthier, others do too. "