From the horse’s mouth
YouTube’s efforts to control the spread of disinformation have so far been limited in scope.
These efforts have focused on managing accounts and channels (collections of YouTube videos submitted by a YouTube account holder and curated by that holder) in a robust manner, such as implementing three-strikes policies against repeated disinformation. They have also extended to tuning the content-recommendation engine so as to effectively “bury” that kind of content, keeping it out of end-users’ default views.
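A three-strikes policy of this kind can be sketched as a small piece of moderation logic. The sketch below is illustrative only, assuming an in-memory strike ledger with strikes that age out after a fixed window; names like `Channel` and `record_strike`, the 90-day expiry and the three-strike limit are assumptions, not YouTube's actual implementation.

```python
# Hypothetical sketch of three-strikes moderation. The Channel type,
# strike limit and expiry window are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

STRIKE_LIMIT = 3
STRIKE_EXPIRY = timedelta(days=90)  # assumed: strikes age out after 90 days

@dataclass
class Channel:
    name: str
    strikes: list = field(default_factory=list)  # timestamps of past strikes
    terminated: bool = False

def record_strike(channel: Channel, now: datetime) -> str:
    """Record a disinformation strike and apply escalating penalties."""
    # Drop strikes older than the expiry window before counting.
    channel.strikes = [t for t in channel.strikes if now - t < STRIKE_EXPIRY]
    channel.strikes.append(now)
    count = len(channel.strikes)
    if count >= STRIKE_LIMIT:
        channel.terminated = True
        return "channel terminated"
    return f"warning {count} of {STRIKE_LIMIT}"
```

The expiry step matters to the design: without it, any channel would eventually be terminated for offences spread over years rather than for repeated behaviour within a window.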
But other issues have come up in relation to this topic. One is the need to continually train the artificial-intelligence / machine-learning subsystems underpinning YouTube’s operations with new data that reflects emerging situations, including the use of different keywords and different languages.
Another approach that works against disinformation purveyors is to point end-users to authoritative resources on the topic at hand. This typically manifests as lists of hyperlinks to text and video resources from reputable sources, shown alongside a video or channel that carries questionable material.
But a new topic, or a new angle on an existing topic, can yield a “data void” where there is scant or no information on the topic from reputable sources. This often happens during a fast-moving news event and is fed by the 24-hour news cycle.
Another issue arises when someone creates a hyperlink to, or embeds, a YouTube video in their own online presence. This is a common way to put YouTube video content “on the map” and can cause a video to go viral by racking up many views. On “communications-first” messaging platforms such as SMS/MMS or instant messaging, a preview image of the video will typically appear next to a message that links to it.
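Those preview images are generally built from Open Graph metadata (the `og:image` tag) in the linked page, which is how messaging apps can show a thumbnail without the user opening the link. The minimal sketch below shows that lookup using only the Python standard library; the sample HTML in the usage note is invented for illustration.

```python
# Sketch of how a messaging app could extract the thumbnail a link
# preview would use, by reading the page's og:image Open Graph tag.
from html.parser import HTMLParser

class OGImageParser(HTMLParser):
    """Collect the og:image meta tag from a page's HTML head."""
    def __init__(self):
        super().__init__()
        self.og_image = None

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "meta" and attributes.get("property") == "og:image":
            self.og_image = attributes.get("content")

def preview_image(html: str):
    """Return the og:image URL from the given HTML, or None."""
    parser = OGImageParser()
    parser.feed(html)
    return parser.og_image
```

For example, feeding it `<meta property="og:image" content="https://example.com/thumb.jpg">` inside a page’s head yields that thumbnail URL; a page without the tag yields `None`.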
Initially YouTube looked at preventing a questionable resource from being shared through the platform’s user interface. But this raised questions, including whether it unduly limits a viewer’s freedom to take the content further.
An issue that wasn’t even raised is that a video can be shared without going through YouTube’s user interface at all. This can be done by copying the URL from the address bar when viewing on a regular computer, or by invoking the “share” intent on modern desktop and mobile operating systems. On some operating systems, that extends to printing material out or “throwing” image or video material to the large-screen TV via a platform like Apple TV or Chromecast. Add to this that a user may legitimately want to share the video with others as part of academic research or a news report.
Another approach YouTube is looking at is based on an age-old practice of responsible TV broadcasters, and one YouTube already uses for violent, age-restricted or otherwise questionable content: showing a warning screen, sometimes accompanied by an audio announcement, before the content plays. Most video-on-demand services implement an interactive version of this, at least in their “lean-forward” user interfaces, where the viewer has to assent to the warning before they see any of that content.
In this case, YouTube would run a warning screen about the disinformation in the video before the content plays. Such an approach would make viewers aware of the situation and act as a “speed bump” against continued consumption of that content or following hyperlinks to more of it.
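The interactive, assent-required variant described above can be sketched as a simple gate in the playback path. This is a minimal illustration under stated assumptions: the `flagged_disinformation` field, the warning text and the `play_with_warning` helper are all invented here, not YouTube’s actual player logic.

```python
# Sketch of a "lean-forward" warning interstitial: flagged content is
# held behind a warning the viewer must assent to before playback.
# Field names and warning text are assumptions for illustration.
WARNING_TEXT = (
    "This video has been flagged as containing potential disinformation. "
    "Authoritative resources on this topic are listed below the player."
)

def play_with_warning(video: dict, assent) -> str:
    """Require viewer assent before playing flagged content.

    `assent` is a callback (here standing in for the UI dialog) that
    receives the warning text and returns True if the viewer accepts.
    """
    if video.get("flagged_disinformation"):
        if not assent(WARNING_TEXT):
            return "playback declined"
    return f"playing {video['id']}"
```

Unflagged videos skip the interstitial entirely, so the “speed bump” only costs the viewer a click on content that has actually been flagged.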
Another issue YouTube is working on is keeping its anti-disinformation efforts culturally relevant. This takes in various nations’ historical and political contexts, whether a news or information source is an authoritative independent outlet or simply a propaganda machine, fact-checking requirements, and linguistic issues, among other things. The historical and political context could include conflicts that have peppered a nation’s or culture’s history, or how the nation has changed governments.
Supporting relevance to many different cultures gives YouTube’s anti-disinformation effort some “look-ahead” sense when handling further fake-news campaigns. It also encompasses recognising where a disinformation campaign is being “shaped” for a particular geopolitical area, with that area’s history woven into the messaging.
But whatever YouTube does may have limited effect if the purveyors of this kind of nonsense use other services to host their video content. This can manifest in alternative “free-speech” video-hosting services like BitChute, DTube or PeerTube. Or the content creator can host the video on their own Website, something that becomes more feasible as the computing power needed for video hosting at scale becomes cheaper.
What is being raised here is YouTube using its own resources to limit the spread of disinformation hosted on its own servers, rather than looking at the issue holistically. Even so, the platform is addressing matters like the ever-evolving message of disinformation that adapts to particular cultures, along with using warning screens before such videos play.
This compares with third-party-gatekeeper approaches like NewsGuard (HomeNetworking01.info coverage), where an independent third party scrutinises news content and sites, then records the results in a database. Various forms of logic can then work from this database to deny advertising to a site or raise a warning flag when users interact with it.
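The database-driven logic described above can be sketched as a simple lookup with thresholds. Everything in this sketch is an assumption for illustration: the ratings table, the 0–100 scores and both thresholds are invented, and this is not NewsGuard’s actual schema or scoring.

```python
# Sketch of gatekeeper logic working from an independent ratings
# database: low-rated sites lose advertising, the worst also get a
# user-facing warning flag. All data and thresholds are invented.
RATINGS = {
    "trusted-news.example": {"score": 92},
    "dubious-site.example": {"score": 30},
}

AD_THRESHOLD = 60       # assumed: below this, deny advertising
WARNING_THRESHOLD = 45  # assumed: below this, also warn users

def evaluate_site(domain: str) -> dict:
    """Derive enforcement actions from a site's rating, if any."""
    rating = RATINGS.get(domain)
    if rating is None:
        # Unrated sites get no action either way.
        return {"serve_ads": True, "show_warning": False}
    score = rating["score"]
    return {
        "serve_ads": score >= AD_THRESHOLD,
        "show_warning": score < WARNING_THRESHOLD,
    }
```

The point of the two thresholds is that advertisers and end-users can be protected independently: a middling site might lose ad revenue without its visitors ever seeing a warning flag.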
But by recognising that YouTube is being used as a host for fake-news and disinformation videos, Google is taking further action on the issue, even though it will end up playing cat-and-mouse with disinformation campaigns.