Radio and TV Broadcast Commentary Archive

How can social media keep itself socially sane?

Broadcast

Facebook login page

Four Corners (ABC Australia) – Inside Facebook

iView – Click to view

Transcript

My Comments

I recently watched the Four Corners “Inside Facebook” episode on ABC TV Australia, which examined the issues and impact surrounding content made available on that platform. It relates to recent questions about the Silicon Valley social-media and content-aggregation giants and their responsibility for content made available by their users.

I also saw the concepts raised in this episode come to the fore over the past few weeks with the InfoWars conspiracy-theory site saga that was boiling over in the USA. There, concern was being raised about the vitriol that the InfoWars site was posting, especially in relation to recent school shootings in that country. At the time of writing, podcast-content directories like Spotify and Apple iTunes were pulling podcasts generated by that site.

The telecast highlighted how the content moderation staff contracted by Facebook were handling questionable content like self-harm, bullying and hate speech.

For a long time, Facebook took a content-moderation approach where only the bare minimum of action was taken against questionable content. This was because a heavy-handed approach to censoring content on the platform would drive end-users away from it. But recent scandals and issues, like the Cambridge Analytica affair and the allegations regarding fake news, have put Facebook on edge regarding this topic.

Drawing attention to and handling questionable content

At the moment, Facebook are outsourcing most of the content-moderation work to outside agencies and have been very secretive about how this is done. The content-moderation workflow operates on a reactive basis: other Facebook users use the “report” function in the user interface to draw the moderators’ attention to questionable content.

This is very different from moderating a small blog or forum, which one person or a small team can handle thanks to the modest traffic these small Web presences attract. Facebook has to engage these content-moderation agencies to work at the scale it operates at.

The reporting of questionable content, especially abusive content, is hampered by the weak user experience offered for this task. This is even more so where Facebook is used through a user interface that offers less than the full Web-based experience, such as some native mobile-platform apps.

Part of the reason for this self-regulated approach is that, in most democratic countries, social media, unlike traditional broadcast media, is not subject to government oversight and regulation. Nor is it overseen by “press councils” as happens with traditional print media.

Handling content

When a moderator is faced with content identified as showing graphic violence, they have three options: ignore it, leaving it as-is on the platform; delete it, removing it from the platform; or mark it as disturbing, which restricts who can see the content and how it is presented, including a warning notice the user must click through before the content is shown. As well, they can notify the publisher who put up the content about the action taken on it. In some cases, “marking as disturbing” may be used to raise public awareness of the situation portrayed in the content.
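The three options described above amount to a simple triage rule. As a rough sketch only, the decision could be modelled as follows; the names (ModerationAction, moderate_graphic_violence) and the two yes/no inputs are my own illustrative assumptions, not Facebook’s actual moderation tooling or policy logic:

```python
from enum import Enum

class ModerationAction(Enum):
    IGNORE = "ignore"                        # leave the content as-is on the platform
    DELETE = "delete"                        # remove the content from the platform
    MARK_DISTURBING = "mark_as_disturbing"   # restrict visibility and add a click-through warning

def moderate_graphic_violence(promotes_violence: bool, raises_awareness: bool) -> ModerationAction:
    """Illustrative triage for reported graphic-violence content,
    loosely based on the three options described in the broadcast."""
    if promotes_violence:
        return ModerationAction.DELETE
    if raises_awareness:
        # e.g. newsworthy footage kept up to draw attention to a situation
        return ModerationAction.MARK_DISTURBING
    return ModerationAction.IGNORE
```

In a real workflow, whichever action is chosen, the publisher would also be notified of the outcome, as the broadcast describes.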

They also touched on dealing with visual content depicting child abuse. One of the factors raised is that the more views such content receives, the more the abuse against the victim of that incident is multiplied.

As well, child-abuse content isn’t readily reported to law-enforcement authorities unless it is streamed live using Facebook’s live-video streaming function. This is because a video clip could have been put up by someone at a prior time and shared on by someone else, or it could be a link to content already hosted somewhere else online. But Facebook and their content-moderating agencies engage child-safety experts as part of the moderating team to determine whether content should be reported to law enforcement (and which jurisdiction should handle it).

When facing content that depicts suicide, self-harm or similar situations, the moderating agencies treat these as high-priority situations. Here, if the content promotes this kind of self-destructive behaviour, it is deleted. Other material is flagged so as to show a “checkpoint” on the publisher’s Facebook user interface, where the user is invited to take advantage of mental-health resources local to them and relevant to their situation.

Often, the desperate Facebook user is posting this kind of content as a personal “cry for help”, which isn’t healthy. Typically it is a way to let their social circle, i.e. their family and friends, know of their personal distress.

Another issue that has also been raised is the existence of underage accounts, where children under 13 operate a Facebook presence by lying about their age. But these accounts are only dealt with if a Facebook user draws attention to their existence.

An advertising-driven platform

What the Four Corners telecast highlighted was that Facebook, like the other Silicon Valley social-media giants, makes most of its money from on-site advertising. The more engagement end-users have with these platforms, the more advertising appears on the pages, including new ads, which leads to more money for the social-media giant.

This is why some questionable content still exists on Facebook and similar platforms: it increases engagement with these platforms. This is despite the fact that most of us who use these platforms aren’t likely to actively seek this kind of content.

But this show didn’t touch on the concept of “brand safety”, which is being raised in the advertising industry. This is the risk of a brand’s image appearing next to controversial content, which could damage the brand’s reputation; it is a concept highly treasured by most consumer-facing brands maintaining a “friendly to family and business” image.

A very challenging task

Moderating staff will also find themselves in very mentally-challenging situations while they do this job because in a lot of cases, this kind of disturbing content can effectively play itself over and over again in their minds.

The hate speech quandary

The most contentious issue that Facebook, like the rest of the Social Web, is facing is hate speech. But what qualifies as hate speech, and how obvious does it have to be before it has to be acted on? The broadcast initially drew attention to an Internet meme questioning “one’s (white) daughter falling in love with a black person”, which on its own doesn’t obviously constitute an act of hatred. Factors that may be used as qualifiers include the minority group involved, the role it plays in the accusation, the context of the message, and the kind of pejorative terms used.

The broadcast also underscored Facebook’s role in providing a platform for legitimate political debate. But Facebook can delete material if a successful criminal action is taken against its publisher.

Facebook has a “shielded” content policy for highly-popular political pages, which is something similarly afforded to respected newspapers and government organisations; and such pages could be treated as if they are a “sacred cow”. Here, if there is an issue raised about the content, the complaint is taken to certain full-time content moderators employed directly by Facebook to determine what action should be taken.

A question that was raised in the context of hate speech was the successful criminal prosecution of alt-right activist Tommy Robinson for sub judice contempt of court in Leeds, UK. Here, he had used Facebook to make a live broadcast about a criminal trial in progress as part of his far-right agenda. But Twitter had taken down the offending content while Facebook didn’t act on the material. From further personal research on extant media coverage, he had committed a similar contempt-of-court offence in Canterbury, UK, thus underscoring a similar modus operandi.

A core comment raised about Facebook and the Social Web is that the more open the platform, the more likely one is to see inappropriate, unpleasant or socially-undesirable content on it.

But Facebook have been running a public-relations campaign regarding cleaning up its act in relation to the quality of content that exists on the platform. This is in response to the many inquiries it has been facing from governments regarding fake news, political interference, hate speech and other questionable content and practices.

Although Facebook is the most common social-media platform in use, the issues drawn out regarding the posting of inappropriate content also affect other social-media platforms and, to some extent, other open freely-accessible publishing platforms like YouTube. There is also the fact that these platforms can be used to link to content already hosted on other Websites, like those facilitated by cheap or free Web-hosting services.

There may be some issues covered in this article that concern you or someone else using Facebook. Here are some helplines you can contact:

Australia

Lifeline

Phone: 13 11 14
http://lifeline.org.au

Beyond Blue

Phone: 1300 22 46 36
http://beyondblue.org.au

New Zealand

Lifeline

Phone: 0800 543 354

Depression Helpline

Phone: 0800 111 757

United Kingdom

Samaritans

Phone: 116 123
http://www.samaritans.org

SANELine

Phone: 0300 304 7000
http://www.sane.org.uk/support

Eire (Ireland)

Samaritans

Phone: 1850 60 90 90
http://www.samaritans.org

USA

Kristin Brooks Hope Center

Phone: 1-800-SUICIDE
http://imalive.org

National Suicide Prevention Lifeline

Phone: 1-800-273-TALK
http://www.suicidepreventionlifeline.org/


Spirit Internet to provide infrastructure-level competition in Geelong

Article – From the horse’s mouth

Cunningham Pier, Geelong, Australia by Bernard Spragg. NZ from Christchurch, New Zealand (Cunningham Pier. Geelong Vic.) [CC0], via Wikimedia Commons

Spirit Telecom to introduce infrastructure-level competition for next-generation broadband to Geelong

Spirit Telecom

Hey Geelong – Did you hear us on the radio? (Interview broadcast on Radio Bay FM)

My Comments

Recently, Radio Bay FM in Geelong broadcast an interview about Spirit Telecom setting up shop in this regional boom-city. Here, Roxie Bennett interviewed Spirit Telecom’s managing director Geoff Neate about the pending arrival of their independent infrastructure setup as part of her lifestyle segment broadcast.

Spirit Telecom has been established since 2005 and has provided infrastructure-level competition for broadband Internet service in some of Melbourne’s inner neighbourhoods. Here, households and businesses who sign up with Spirit have access to simultaneous ultra-high-speed bandwidth thanks to the use of Ethernet cabling within the buildings and a fibre-optic network for the last-mile connection to the building.

Now Spirit intends to roll this infrastructure out to Geelong, with the first development to benefit being the Federal Mills regional technology hub, an example of the new economic direction for that city. Let’s not forget that Geelong is starting to take on high-rise development within its CBD, which could open the door for Spirit Telecom to wire up the new developments for Ethernet-based FTTB next-generation broadband. This is in conjunction with Spirit Telecom’s other efforts to reach other Australian cities to provide developers, building owners and businesses with a viable high-quality alternative to the NBN.

This broadcast is a sign of the times because it highlighted the slowpoke effort that NBN has made in providing a reliable next-generation broadband service in most of built-up Australia. There was even an on-air “dig” at NBN because of the delay in rolling out broadband into that city.

Personally, I see Spirit Telecom’s effort in running their own infrastructure and high-quality next-generation broadband Internet as something that will “put a rocket up” NBN to roll out infrastructure into that city.


The issue of cybercrime now reaches the national level

Article (Broadcast transcript)

HACKED! – Four Corners (ABC) Video and transcript through this link

Previous coverage on HomeNetworking01.info

Interview and Presentation – Security Issues associated with cloud-based computing (Interview with Alastair MacGibbon and Brahman Thyagalingham)

Symantec Symposium 2012 – My Observations From This Event

My Comments

I watched the Four Corners “Hacked” broadcast concerning data security and cyber espionage, which encompassed the issue of cyber attacks affecting nations as a whole.

The show touched on a few key points, some of which were raised in the previous events that I attended. Here, it underscored hacking as part of espionage by nation-states like China. The targets of this espionage were intellectual property belonging to private-sector companies or government departments, especially where military information was involved.

Example incidents include the recent theft of blueprints for ASIO’s new offices, along with a cyber attack against Codan, an electronics supplier to Australian and allied defence forces. The tactics used against Codan included a public-access Wi-Fi network being used to install malware on a laptop belonging to a representative of that company when they visited China, along with a “spear-phishing” attack on their email. It also underscored that it is not just the entity’s computer networks that are at risk but the “crown jewels”, i.e. the key intellectual property that belongs to the entity.

The same show also underscored the use of malware to target essential-services systems, like a nuclear enrichment plant in Iran and an Indian telecommunications satellite. Here, they raised the spectre of electricity grids, telecommunications backbones and similar infrastructure being targeted by sophisticated cyber attacks. This becomes more real as most essential-services systems become computer-controlled and connected to the Internet. I would like to see these systems designed with fail-safe operation in mind, such as working offline and providing the core services at known specifications if things go wrong online.

Later in the show, Alastair MacGibbon called for the Australian government to require businesses and other organisations to publicly disclose cyber attacks, across the board for all entities. He had previously underscored this in the interview and presentation where he described Australia’s data-protection laws as careless, typical of the “She’ll Be Right” nation.

The Australian Government has since improved its data-protection laws by tabling bills that require cyber-attack disclosure from the larger public companies rather than all companies.

As well, cyber espionage by nation-states is being considered the equivalent of wartime activities like nuclear war and the treatment of civilians, and needs to be tackled at an international level in a similar way to how other such wartime activities have been dealt with. Personally, I see the latest cyber attacks, especially those emanating from countries that were behind the Iron Curtain, as the makings of another “Cold War”, and these have to be treated accordingly.
