How can social media keep itself socially sane?

Broadcast

Four Corners (ABC Australia) – Inside Facebook

iView – Click to view

Transcript

My Comments

I had just watched the Four Corners “Inside Facebook” episode on ABC TV Australia, which touched on the issues and impact Facebook is having concerning content made available on that platform. It related to recent questions about the Silicon Valley social-media and content-aggregation giants and their responsibility for content made available by their users.

I also saw the concepts raised in this episode come to the fore over the past few weeks with the InfoWars conspiracy-theory site saga that was boiling over in the USA. There, concern was being raised about the vitriol the InfoWars site was posting, especially in relation to recent school shootings in that country. At the time, podcast-content directories like Spotify and Apple iTunes were pulling podcasts generated by that site, while Facebook was facing pressure over whether it would act on the site’s presence on its own platform.

The telecast highlighted how the content-moderation staff contracted by Facebook handle questionable content like self-harm, bullying and hate speech.

For most of the time, Facebook took a content-moderation approach where only the bare minimum of action was taken to deal with questionable content. This was because, if they took a heavy-handed approach to censoring content that appeared on the platform, end-users would drift away from it. But recent scandals and issues like the Cambridge Analytica affair and the allegations regarding fake news have put Facebook on edge regarding this topic.

Drawing attention to and handling questionable content

At the moment, Facebook are outsourcing most of the content-moderation work to outside agencies and have been very secretive about how this is done. But the content-moderation workflow is largely reactive: it depends on other Facebook users using the “report” function in the user interface to draw the moderators’ attention to questionable content.
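To picture how such a reactive workflow might hang together, here is a minimal sketch in Python. It is purely illustrative and based only on what the program described; the class and function names are my own assumptions and don’t reflect Facebook’s actual internal systems.

from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Report:
    """A user-submitted report about a piece of content (illustrative only)."""
    content_id: str
    reporter_id: str
    reason: str  # e.g. "hate speech", "self-harm", "graphic violence"

class ReactiveModerationQueue:
    """Hypothetical queue: content is only reviewed after someone reports it."""

    def __init__(self) -> None:
        self._pending = deque()

    def report(self, report: Report) -> None:
        # Nothing happens to the content until a user files a report.
        self._pending.append(report)

    def next_for_review(self) -> Optional[Report]:
        # A contracted moderator pulls the oldest unreviewed report.
        return self._pending.popleft() if self._pending else None

# Example: a post only enters the moderation workflow once a user reports it.
queue = ReactiveModerationQueue()
queue.report(Report(content_id="post-123", reporter_id="user-456", reason="hate speech"))
print(queue.next_for_review())

The point of the sketch is simply that nothing is proactively scanned in this model: if nobody presses “report”, the queue stays empty.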

This is very different from managing a small blog or forum, which is something one person or a small team could do thanks to the small amount of traffic these small Web presences attract. Facebook has to engage these content-moderation agencies to be able to work at the scale it operates at.

The problem is compounded by the weak user experience offered for reporting this kind of content, especially abusive content. It is more so where Facebook is used through a user interface that offers less than the full Web-based experience, such as some native mobile-platform apps.

Adding to this is the fact that, in most democratic countries, social media, unlike traditional broadcast media, is not subject to government oversight and regulation. Nor is it subject to oversight by “press councils” in the way traditional print media is.

Handling content

When a moderator is faced with content identified as containing graphic violence, they have three options: ignore the content and leave it on the platform as is; delete it, removing it from the platform; or mark it as disturbing, which subjects it to restrictions on who can see it and how it is presented, including a warning notice the user must click on before the content is shown. As well, they can notify the publisher who put up the content about what action has been taken with it. In some cases, content being “marked as disturbing” may be a way to raise public awareness of the situation portrayed in it.
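The three options described above can be pictured as a simple branching rule. The sketch below is a hypothetical illustration only; the names and the two yes/no inputs are my assumptions, and the real policy guidance is far more nuanced than this.

from enum import Enum, auto

class Action(Enum):
    IGNORE = auto()              # leave the content on the platform as is
    DELETE = auto()              # remove the content from the platform
    MARK_AS_DISTURBING = auto()  # restrict visibility and add a click-through warning

def moderate_graphic_violence(violates_rules: bool, raises_awareness: bool) -> Action:
    """Hypothetical decision rule for content flagged as graphic violence."""
    if not violates_rules:
        return Action.IGNORE
    if raises_awareness:
        # e.g. footage kept up to raise awareness of the situation it portrays
        return Action.MARK_AS_DISTURBING
    return Action.DELETE

def notify_publisher(content_id: str, action: Action) -> None:
    # In each case the publisher can be told what action was taken with their content.
    print(f"Publisher of {content_id} notified: {action.name}")

notify_publisher("post-789", moderate_graphic_violence(True, True))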

They also touched on dealing with visual content depicting child abuse. One of the factors raised is that every additional view of content depicting abuse multiplies the abuse inflicted on the victim of that incident.

As well, child-abuse content isn’t readily reported to law-enforcement authorities unless it is streamed live using Facebook’s live-video streaming function. This is because the video clip could have been put up by someone at a prior time and shared on by someone else, or it could be a link to content already hosted somewhere else online. But Facebook and their content-moderating agencies engage child-safety experts as part of their moderating teams to determine whether it should be reported to law enforcement, and which jurisdiction should handle it.

When facing content that depicts suicide, self-harm or similar situations, the moderating agencies treat these as high-priority situations. If the content promotes this kind of self-destructive behaviour, it is deleted. Other material is flagged so as to show a “checkpoint” on the publisher’s Facebook user interface, where the user is invited to take advantage of mental-health resources that are local to them and relevant to their situation.
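As a rough illustration of the triage just described, the sketch below branches between deleting content that promotes self-harm and attaching a “checkpoint” that points to local help resources. Again, this is an assumption-laden sketch rather than Facebook’s real logic; the helpline mapping simply reuses some of the numbers listed at the end of this article.

# Hypothetical sketch of the self-harm triage described above.
HELPLINES = {
    "AU": "Lifeline 13 11 14",
    "NZ": "Lifeline 0800 543 354",
    "UK": "Samaritans 116 123",
    "US": "National Suicide Prevention Lifeline 1-800-273-TALK",
}

def triage_self_harm_content(promotes_self_harm: bool, publisher_country: str) -> str:
    if promotes_self_harm:
        # Content that promotes self-destructive behaviour is removed outright.
        return "DELETE"
    # Otherwise a "checkpoint" is shown on the publisher's user interface,
    # pointing them towards mental-health resources local to them.
    helpline = HELPLINES.get(publisher_country, "a local mental-health service")
    return f"SHOW_CHECKPOINT: you may wish to contact {helpline}"

print(triage_self_harm_content(False, "AU"))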

But it is often a situation where a desperate Facebook user is posting this kind of content as a personal “cry for help”, which isn’t a healthy way to cope. Typically it is a way to let their social circle, i.e. their family and friends, know of their personal distress.

Another issue that has been raised is the existence of underage accounts, where children under 13 operate a Facebook presence by lying about their age. But these accounts are only dealt with if a Facebook user draws attention to their existence.

An advertising-driven platform

What was highlighted in the Four Corners telecast was that Facebook, like the other Silicon Valley social-media giants, makes most of its money out of on-site advertising. The more engagement end-users have with these social-media platforms, the more advertising appears on the pages they see, including new ads, which leads to more money for the social-media giant.

This is why some questionable content still exists on Facebook and similar platforms: it increases engagement with them, even though most of us who use these platforms aren’t likely to actively seek this kind of content out.

But this show didn’t even touch on the concept of “brand safety” which is being raised in the advertising industry. This is the risk of a brand’s advertising appearing next to controversial content, which could be seen as damaging to the brand’s reputation; it is a concept highly treasured by most consumer-facing brands that maintain a “friendly to family and business” image.

A very challenging task

Moderation staff also find themselves in very mentally challenging situations while doing this job because, in a lot of cases, this kind of disturbing content can effectively play itself over and over again in their minds.

The hate speech quandary

The most contentious issue that Facebook, like the rest of the Social Web, is facing is hate speech. But what qualifies as hate speech, and how obvious does it have to be before it has to be acted on? The broadcast initially drew attention to an Internet meme questioning “one’s (white) daughter falling in love with a black person”, which doesn’t obviously amount to an act of hatred. The factors that may be used as qualifiers include the minority group involved, the role that group plays in the accusation, the context of the message, and the kind of pejorative terms used.

Facebook also underscores that it provides a platform to host legitimate political debate. But it can delete resources if a successful criminal action has been taken against the publisher.

Facebook has a “shielded” content policy for highly popular political pages, similar to what is afforded to respected newspapers and government organisations; such pages can end up being treated as “sacred cows”. Here, if an issue is raised about the content, the complaint is taken to full-time content moderators employed directly by Facebook, who determine what action should be taken.
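A hypothetical sketch of how such an escalation path might be expressed: complaints about “shielded” pages bypass the contracted agencies and go to Facebook’s own full-time moderators. The names and the two-tier split are my assumptions based on what the program described.

SHIELDED_PAGES = {"respected-newspaper", "government-agency", "popular-political-page"}

def route_complaint(page_id: str) -> str:
    """Hypothetical routing of a complaint about a page's content."""
    if page_id in SHIELDED_PAGES:
        # Complaints about shielded pages go to full-time moderators
        # employed directly by Facebook.
        return "escalate to in-house Facebook moderators"
    # Everything else is handled by the contracted moderation agencies.
    return "handle via contracted moderation agency"

print(route_complaint("popular-political-page"))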

A question raised in the context of hate speech was the successful criminal prosecution of alt-right activist Tommy Robinson for sub judice contempt of court in Leeds, UK. Here, he had used Facebook to make a live broadcast about a criminal trial in progress as part of his far-right agenda. Twitter had taken down offending content of his, while Facebook didn’t act on the material. From further personal research into the media coverage, he had committed a similar contempt-of-court offence in Canterbury, UK, underscoring a similar modus operandi.

A core comment raised about Facebook and the Social Web is that the more open the platform, the more likely one is to see inappropriate, unpleasant or socially-undesirable content on it.

But Facebook has been running a public-relations campaign about cleaning up its act in relation to the quality of content that exists on the platform. This is in response to the many inquiries it has been facing from governments regarding fake news, political interference, hate speech and other questionable content and practices.

Although Facebook is the most commonly used social-media platform, the issues drawn out regarding the posting of inappropriate content also affect other social-media platforms and, to some extent, other open, freely-accessible publishing platforms like YouTube. There is also the fact that these platforms can be used to link to content already hosted on other Websites, such as those facilitated by cheap or free Web-hosting services.

Some of the depression, suicide and related issues I have covered in this article may concern you or someone else using Facebook. Here are contact details for relevant organisations in your area that may be able to help you or the other person with these issues.

Australia

Lifeline

Phone: 13 11 14
http://lifeline.org.au

Beyond Blue

Phone: 1300 22 46 36
http://beyondblue.org.au

New Zealand

Lifeline

Phone: 0800 543 354
http://lifeline.org.nz

Depression Helpline

Phone: 0800 111 757
https://depression.org.nz/

United Kingdom

Samaritans

Phone: 116 123
http://www.samaritans.org

SANELine

Phone: 0300 304 7000
http://www.sane.org.uk/support

Eire (Ireland)

Samaritans

Phone: 1850 60 90 90
http://www.samaritans.org

USA

Kristin Brooks Hope Center

Phone: 1-800-SUICIDE
http://imalive.org

National Suicide Prevention Lifeline

Phone: 1-800-273-TALK
http://www.suicidepreventionlifeline.org/
