YouTube has recently been taking action against the dissemination of disinformation via its video-sharing platform, and a recently published study shows that this has affected the spread of such content across other parts of the Social Web.
The Engadget article shows how the study's report analyses what happened when certain YouTube policies came into play, and their effect on disinformation regarding the 2020 US Presidential Election and its results. All dates referenced in this article are local to North America.
On December 9, YouTube enacted a policy to remove videos claiming that widespread errors or fraud changed the outcome of the US Presidential Election. This yielded a significant drop in shares of this kind of disinformation via Twitter.
Similarly, after the January 6 Capitol Hill insurrection, YouTube handed out “strikes” against online-content channels on its platform for sharing misinformation about the election results. A strike means that the channel cannot upload new content for a period of time; if it accrues three strikes within 90 days, it is removed from YouTube. This is very similar to the penalty-point or demerit-point systems that are part of most jurisdictions’ traffic laws.
This policy also yielded a similarly sharp drop in the sharing of such content across the Twitter platform. A similar pattern was noticed on Inauguration Day, when President Joe Biden was sworn into office.
The same report observes similar metrics across Facebook’s platforms, using analytics tools that Facebook provides, such as CrowdTangle.
Why is this so?
YouTube is seen as the “go-to” online platform for hosting and viewing user-generated video content. This is because it is free of charge for hosting and casual viewing of such content, while just about every mobile, set-top and connected-TV platform has either a native app or inherent support for it.
As well, it is relatively easy to share YouTube-hosted video content on other online platforms, thanks to logic that simplifies this process for the common social-media sites like Facebook or Twitter. Similarly, if a video is posted publicly to YouTube, Google makes it easy for that video to appear in its well-known search engine, whose brand has become a generic trademark for searching for information online.
If Google maintains robust content control on the YouTube platform, there is a reduced quantity of shareable fake news and disinformation there. As well, most people who use YouTube for video content are also likely to use the Facebook Group’s platforms or Twitter for social media.
As well, most bloggers and small-time Website operators who want to cite a video resource are likely to share or embed video hosted on YouTube or Vimeo. These kinds of links also expose the content on Google, Bing and other search engines, because those engines identify the number of sites linking to an online resource and “bump” it higher in the search results.
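As an illustration of how easy this embedding is, YouTube supports the standard oEmbed protocol, which is one way blogging platforms retrieve ready-made embed markup for a public video. The sketch below (the helper name and the `VIDEO_ID` placeholder are illustrative, not from the study) shows how an oEmbed request URL is constructed; fetching that URL returns JSON whose `html` field contains the embeddable iframe markup.

```python
from urllib.parse import urlencode

def build_oembed_url(video_url: str) -> str:
    """Build a YouTube oEmbed request URL for a given public video URL.

    Fetching this URL (for example with urllib.request) returns JSON
    whose 'html' field holds the ready-made <iframe> embed markup that
    blogs and content-management systems drop into a page.
    """
    params = urlencode({"url": video_url, "format": "json"})
    return f"https://www.youtube.com/oembed?{params}"

# Illustrative placeholder video URL:
print(build_oembed_url("https://www.youtube.com/watch?v=VIDEO_ID"))
```

This hands-off retrieval of embed markup is part of why YouTube-hosted video spreads so readily across blogs and social platforms.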
What to be aware of
An issue to be aware of in the context of fake news and disinformation is the rise of alternative platforms. These would include alternative video-hosting platforms like BitChute or PeerTube; alternative social networks like Parler or Gab; or “communications-first” platforms like Signal and Telegram with their “virtual-contact” / “broadcast-channel” functionality.
Users who follow this kind of disinformation will then see the popular social-media and video-sharing platforms in the same light as established media outlets. Here, they will be condescending towards such platforms, for example referring to them as “television”. This is largely because the broadcast medium allows established broadcasters to easily maintain particular content standards.
What has been shown here is that if at least one popularly used content platform demonstrates robust content management against disinformation, it has a ripple effect across the rest of the Social Web.