Tag: content management systems

Study confirms content-recommendation engines can further personal biases

Content recommendation engines on the likes of YouTube can easily lead viewers down a content rabbit hole if they are not careful

Article

This website lets you see how conspiracy theorists fall down the YouTube rabbit hole | Mashable

My Comments

Increasingly, online services of all kinds, be they social media services, news-aggregation portals, video-streaming services or the like, are using algorithms to expose their users to content they have not yet discovered. This is part of their vision of creating a customised user experience for each person who uses these services, an Internet-driven take on the concept of “mass customisation”.

Those of you who use Netflix may find that the newsletter they send you contains movie recommendations based on what you have been watching. You will also see on the Netflix screen a “recommendations” playlist of movies similar to those you have been watching through that service.

A very common example of this is YouTube, with its recommended-content lists suggesting what to view next or which channels to subscribe to. A large share of the content viewed on YouTube is the result of viewers following the service’s personalised content recommendations.

The issue being raised with these algorithms is how they can perpetuate a personal “thought bubble”, even though there is other material available on the online service that may not mesh with that “bubble”. Typically this happens because the algorithm surfaces content that amplifies what the viewer has seen previously and panders to their existing biases.

An online experiment created by a Web developer and funded by the Mozilla Foundation explores this concept further in the context of YouTube. The experiment, called “TheirTube”, emulates the YouTube content-discovery and viewing habits of six different personas, such as conspiracists, conservative thinkers and climate deniers, as they view content related to their chosen subjects.

It shows what each of these personas is recommended to view next and which channels they are prompted to subscribe to, demonstrating how the content-recommendation engine can underscore or amplify particular viewpoints.

This is a common problem with the artificial-intelligence / machine-learning approach that these services use for content recommendation. It comes about because the end-user “seeds” the algorithm with the content they actually interact with and the content sources they actually follow. The attributes associated with that content effectively determine the “rules” the algorithm works on.
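To illustrate that “seeding” idea, here is a toy content-based recommender written in Python. It is nothing like YouTube’s actual system, and the catalogue, tags and scoring are invented purely for illustration, but it shows how a single watched video can skew everything that gets recommended afterwards.

# A toy content-based recommender - not YouTube's actual algorithm, just an
# illustration of how "seeding" with watched content narrows what comes next.
from collections import Counter

catalogue = {
    "Moon landing was staged": {"conspiracy", "space"},
    "Apollo 11 documentary": {"space", "history"},
    "Flat Earth explained": {"conspiracy", "science"},
    "Intro to orbital mechanics": {"space", "science"},
    "Baking sourdough at home": {"cooking"},
}

def recommend(watch_history, top_n=3):
    # Build a profile from the tags of everything the user has watched.
    profile = Counter(tag for title in watch_history for tag in catalogue[title])
    # Score unwatched videos by how strongly their tags match that profile.
    scores = {
        title: sum(profile[tag] for tag in tags)
        for title, tags in catalogue.items()
        if title not in watch_history
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# One conspiracy video in the history is enough to push similar material
# to the top of the "watch next" list, while unrelated topics never appear.
print(recommend(["Moon landing was staged"]))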

If you are trying to maintain some critical thinking and use content services like YouTube for informational content, you may have to rely less on their content-recommendation engine for finding new material. You may find it useful to manually seek out content with a contrasting viewpoint to avoid the creation of a “thought bubble”.

As well, if you deliberately run contrasting content through the online service alongside following its recommendations, you may be able to nudge the content-recommendation engine into bringing up more varied material.

Content-recommendation engines that are driven by what we choose to watch can easily cocoon us in a content bubble that perpetuates our personal biases.

Google provides a tool to forums and the like to combat trolls

Article

ThinkBroadband Forum – an example of a forum where content moderation can be simplified using Google Perspective

Google’s new innovative technology aims to combat online trolls | Android Authority

My Comments

Previously, I wrote an article regarding the Internet-troll problem faced by people who have an Internet presence on a social network, forum, chat room or similar environment. There, they face toxic comments and online harassment from miscreants who use their presence on these boards to make trouble.

A small business’s Facebook page – Google Perspective can simplify the management of visitor comments left on pages like these

This is made worse for those of us who operate a corporate Internet presence in these environments for a business or other organisation, or who moderate a forum, blog or similar online presence. In some cases it has caused some of us not to run forums at all, or to turn off commenting on the blogs or other online content we publish.

But Google has come to the fore by developing software that gives moderators and users better control over toxic Internet comments. It builds on a human-driven review cycle, such as end-users reporting or moderators disallowing comments found to be toxic. The Google Perspective software described here learns from those identified troublesome comments in order to determine what happens with new ones.

It is being offered as a set of application-programming-interface “hooks” that can be integrated with content-management systems, social networks and the like. But who knows how long it will take for the APIs that support this functionality to be offered simply as “wrapper” plugins for the popular extensible content-management or forum-management platforms. Similarly, an “outboard” comment host like Livefyre or Disqus could benefit by offering the Google Perspective technology as a feature to help moderators manage incoming comments.
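To give an idea of what these “hooks” look like to a plugin developer, here is a rough Python sketch of a call to Perspective’s Comment Analyzer endpoint. The endpoint, request fields and the YOUR_API_KEY placeholder reflect Google’s published documentation as I understand it, so verify them against the current documentation before building anything on this.

# Rough sketch of scoring a visitor comment with Google's Perspective
# (Comment Analyzer) API. YOUR_API_KEY is a placeholder; the endpoint and
# request fields follow the published docs but should be double-checked.
import requests

API_KEY = "YOUR_API_KEY"
ENDPOINT = ("https://commentanalyzer.googleapis.com/v1alpha1/"
            "comments:analyze?key=" + API_KEY)

def toxicity_score(comment_text):
    payload = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(ENDPOINT, json=payload)
    response.raise_for_status()
    data = response.json()
    # The API returns a probability (0 to 1) that readers would perceive
    # the comment as toxic.
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("You are an idiot and nobody wants you here"))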

Google promoted the ability for a moderator to use Perspective to decide which comments should be held for moderation, or to red-flag potential “flame wars” and miscreants. But they also put forward the idea of letting users filter or sort comments by toxicity, which can be of use to most of us who simply don’t want to waste time reading that junk.
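As a sketch of how a forum or blog plugin might act on those scores, the snippet below holds high-scoring comments for moderation and lets readers sort a thread by civility. The 0.8 threshold and the function names are my own illustrative choices, not anything Google prescribes.

# Illustrative moderation and sorting logic for a forum plugin, acting on a
# Perspective-style toxicity score (a probability between 0 and 1).
# The 0.8 threshold is an arbitrary example, not a figure Google recommends.
HOLD_THRESHOLD = 0.8

def triage_comment(comment_text, score_fn):
    # score_fn would be something like the toxicity_score() sketch above.
    if score_fn(comment_text) >= HOLD_THRESHOLD:
        return "held for moderation"   # a human moderator reviews it first
    return "published"

def sort_by_civility(comments, score_fn):
    # Lets readers push the most toxic comments to the bottom of the thread.
    return sorted(comments, key=score_fn)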

What I see in this is that the Google Perspective comment-management software could make it easier for those of us involved in the Internet conversation to dodge the troublemakers.