Category: Social issues involving home computing

Reverse image searching – a very useful tool for verifying the authenticity of content

Tineye reverse image search

Tineye – one of the most popular and useful reverse image search tools

Article

How To Do A Reverse Image Search From Your Phone | PCMag

My Comments and further information

Increasingly, most of us who regularly interact with the Internet will be encouraged to perform reverse-image searches.

This is where you use an image you supply or reference as a search term for the same or similar images on other Internet resources. It can also be about identifying a person or other object that is in the image.

Increasingly this is being used by people who engage in online dating to verify the authenticity of the person they have connected with on an online-dating or social-media platform. This is driven by romance scams where “catfishing” (pretending to be someone else in order to attract people of a particular kind) is part of the game. Here, part of the modus operandi is for the perpetrator to steal pictures of other people that match a particular look from photo-sharing or social-media sites and use these images in their profile.

It is also being used as a way to verify the authenticity of a product being offered for sale through an online second-hand-goods marketplace like eBay, Craigslist or Gumtree. It also extends to short-term house rentals including AirBnB, where the potential tenant wants to verify the authenticity of the premises that are available to let.

As well, reverse image searching is being considered more relevant when it comes to checking the veracity of a news item that is posted online. This is very important in the era of fake news and disinformation where online images including doctored images are being used to corroborate questionable news articles.

How do you do a reverse image search?

At the moment, there are a few reverse-image-search engines available to the ordinary computer user. These include Tineye, Google Image Search, Bing Visual Search, Yandex’s image search function and Social Catfish’s reverse-image-search function.

Dell Inspiron 14 5000 2-in-1 at Rydges Melbourne (Locanda)

A regular computer like this Dell Inspiron 14 5000 2-in-1 makes it easier to do a reverse image search thanks to established operating system and browser code and its user interface.

The process of using these services involves uploading the image to the service, whether from a file or via “copy-and-paste”, or passing the image’s URL to an address box in the search engine’s user interface. The latter method implies a “search-by-reference” approach, with the reverse-image-search site loading the image associated with that link as its search term.
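
The “search-by-reference” approach can be sketched in a few lines: the snippet below builds ready-to-open search links from an image’s URL. The image address is hypothetical, and the query-string patterns are assumptions based on the publicly visible URLs these engines use, which may change at any time.

```python
from urllib.parse import quote

# Hypothetical image address used purely for illustration
IMAGE_URL = "https://example.com/profile-photo.jpg"

def reverse_search_links(image_url: str) -> dict:
    """Build "search-by-reference" links that hand an image's URL to a
    few reverse-image-search engines.  The query-string patterns are
    assumptions based on each site's publicly visible URLs."""
    encoded = quote(image_url, safe="")  # percent-encode the whole URL
    return {
        "TinEye": f"https://tineye.com/search?url={encoded}",
        "Google": f"https://www.google.com/searchbyimage?image_url={encoded}",
        "Bing": f"https://www.bing.com/images/search?q=imgurl:{encoded}",
    }
```

Opening any of these links in a separate browser tab performs the search, which mirrors the copy-the-URL workflow this section describes.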

Using a regular desktop or laptop computer that runs the common desktop operating systems makes this job easier. This is because the browsers offered on these platforms implement tabs or allow multiple sessions so you can run the site in question in one tab or window and one or two reverse-image-search engines in other tabs or windows.

These operating systems also maintain well-developed file systems and copy-paste transfer algorithms that facilitate the transfer of URLs or image data to these reverse-image-search engines. That will also apply if you are dealing with a native app for that online service such as the client app offered by Facebook or LinkedIn for Windows. As well, Chrome and Firefox provide drag-and-drop support so you can drag the image from that Tinder or Facebook profile in one browser session to Tineye running in the other browser session.

But mobile users may find this process very daunting. Typically it requires the site to be opened and logged in to in Chrome or Safari, then switched to its desktop version, which is the equivalent of viewing it on a regular computer. For Chrome, you tap the three-dot menu and select “Request Desktop Site”. For Safari, you tap the upward-facing arrow to show the “desktop view” option and select that option.

Then you open the image in a new tab and copy the image’s URL from the address bar. That is before you visit Google Image Search or Tineye to paste the URL in that app’s interface.

Google has built a shortcut to its reverse-image-search function into recent mobile versions of Chrome. Here, you “dwell” on the image with your finger to expose a pop-up menu which has the “Search Google For This Image” option. The Bing app also lets you upload images or screenshots for searching.

Share option in Google Chrome on Android


If you use an app like the Facebook, Instagram or Tinder mobile clients, you may have to take a screenshot of the image you want to search on. Recent iOS and Android versions also provide the ability to edit a screenshot before you save it thus cutting out the unnecessary user-interface stuff from what you want to submit. Then you open up Tineye or Google Image Search in your browser and upload the image to the reverse-image-search engine.

How can reverse image searching on the mobile platforms be improved?

What can be done to facilitate reverse image searching on the mobile platforms is for reverse-image-search engines to create lightweight apps for each mobile platform. This app would make use of the mobile platform’s “Share” function for you to upload the image or its URL to the reverse-image-search engine as a search term. Then the app would show you the results of your search through a native interface or a view of the appropriate Web interface.

Share dialog on Android

A reverse-image-search tool like Tineye could be a share-to destination for mobile platforms like iOS or Android

Why have this app work as a “share to” destination? This is because most mobile-platform apps and Web browsers use the “share to” function as a way to take a local or online resource further, whether that is to send it to someone else via a messaging platform including email, obtain a printout or, in some cases, stream it to the big screen via AirPlay or Chromecast.

The lightweight mobile app that works with a reverse-image-search engine answers the reality that most of us use smartphones or mobile-platform tablets for personal online activity. This is more so with social media, online dating and online news sources, thanks to the “personal” size of these devices.

Conclusion

What is becoming real is that reverse image searching, whether of particular images or Webpages, is being seen as important for our security and privacy and for our society’s stability.

NewsGuard to indicate online news sources’ trustworthiness

Articles

Untrustworthy news sites could be flagged automatically in UK | The Guardian

From the horse’s mouth

NewsGuard

Home Page

My Comments

Google News screenshot

Google News – one of the ways we are reading our news nowadays

Since 2016, with the Brexit referendum and the US Presidential Election producing outcomes that were “off the beaten track”, a strong conversation has arisen about the quality of news sources, especially online sources.

This is because most of us are gaining our news through online resources like online-news aggregators such as Google News, search engines like Google or Bing, or social networks like Facebook or Twitter. Meanwhile, traditional media like newspapers, radio and TV are being seen by younger generations as irrelevant, which is leading these outlets to reduce the staff numbers in their newsrooms or even shut down newsrooms completely.

What has been found is that this reliance on online news and information has made us more susceptible to fake news, disinformation and propaganda, which have been found to distort election outcomes and usher in populist political results.

Increasingly we are seeing the rise of fact-checking groups that are operated by newsrooms and universities who verify the kind of information that is being run as news. We are also seeing the electoral authorities like the Australian Electoral Commission engage in public-education campaigns regarding what we pass around on social media. This is while the Silicon-Valley platforms are taking steps to deal with fake news and propaganda by maintaining robust account management and system-security policies, sustaining strong end-user feedback loops, engaging with the abovementioned fact-check organisations and disallowing monetisation for sites and apps that spread misinformation.

Let’s not forget that libraries and the education sector are taking action to encourage media literacy amongst students and library patrons. On this site, I even wrote articles about being aware of fake news and misinformation during the run-up to the UK general election and the critical general elections in Australia, i.e. the NSW and Victoria state elections and the Federal election, which ran consecutively over six months.

Google News on Chrome with NewsGuard in place

NewsGuard highlighting the credibility of online news sources it knows about on Google News

But a group of journalists recently worked on an online resource to make it easy for end-users to verify the authenticity and trustworthiness of online news resources. NewsGuard, as this resource is named, assesses online news resources on factors like how frequently they run false content; responsible gathering and presentation of information; distinguishing between news and opinion or commentary; use of deceptive headlines; and proper error handling. It also weighs factors that affect transparency, like ownership and financing of the resource, including ideological or political leanings of those in effective control; who has effective control and any possible conflicts of interest; the distinction between editorial and advertising or paid content; and the names of the content creators and their contact or biographical information.
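
To illustrate how a weighted editorial checklist like this can collapse into a single rating, here is a short sketch. The criterion names paraphrase the factors listed above, but the point weights and the pass threshold are hypothetical values for illustration, not NewsGuard’s actual scoring.

```python
# Hypothetical weights for the editorial and transparency criteria;
# they sum to 100 so the score reads as a percentage-style figure.
CRITERIA = {
    "does_not_repeatedly_publish_false_content": 22,
    "gathers_and_presents_information_responsibly": 18,
    "distinguishes_news_from_opinion": 12,
    "avoids_deceptive_headlines": 10,
    "handles_errors_properly": 8,
    "discloses_ownership_and_financing": 10,
    "discloses_conflicts_of_interest": 8,
    "labels_advertising_and_paid_content": 7,
    "names_content_creators_with_contact_info": 5,
}

def credibility_score(assessment: dict) -> int:
    """Sum the weights of every criterion the outlet satisfies."""
    return sum(w for c, w in CRITERIA.items() if assessment.get(c))

def shield_colour(score: int, threshold: int = 60) -> str:
    """Map a score to a 'pilot light' colour (threshold is hypothetical)."""
    return "green" if score >= threshold else "red"
```

An outlet satisfying every criterion would score 100 and show a green shield; one failing most of them would fall below the threshold and show red.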

NewsGuard in action on Google Chrome - detail with the Guardian

The NewsGuard “pilot light” on Chrome’s address bar indicating the trustworthiness of a news site

End-users can use a plug-in or extension for the popular desktop browsers which inserts a “shield” beside a Weblink to a news resource, indicating whether it is credible or not, including whether you are simply dealing with a platform or general-info site or a satire page. They can click on the shield icon to see more about the resource, which is described in a form analogous to a nutrition label on packaged foodstuffs.

For the Google Chrome extension, there is also the shield which appears on the address bar and changes colour according to how the Web resource you are reading has been assessed by NewsGuard. It is effectively like a “pilot light” on a piece of equipment that indicates the equipment’s status such as when a sandwich toaster is on or has heated up fully.

NewsGuard basic details screen about the news site you are viewing

Basic details being shown about the trustworthiness of an online news site when you click on the NewsGuard “pilot light”

It is also part of the package for the iOS and Android versions of Microsoft Edge but it will take time for other mobile browsers to provide this as an option.

NewsGuard is a free service, with a significant amount of its funding coming from Microsoft’s Defending Democracy program. This is a program about protecting democratic values like honest and fair elections.

It is also being pitched towards the online advertising industry as a tool to achieve a brand-safe environment for brands and advertisers who don’t want anything to do with fake news and disinformation. This will be positioned as a licensable data source and application-programming interface for this user group to benefit from. Libraries, educational facilities, students and parents are also being encouraged to benefit from the NewsGuard browser add-ons as part of their media-literacy program and curriculum resources.

Detailed "Nutrition Label" report from NewsGuard about The Guardian

Click further to see a detailed “nutrition label” report about the quality and trustworthiness of that online news resource

But I see it also as being of benefit to small newsrooms like music radio stations who want to maintain some credibility in their national or international news coverage. Here, they can make sure that they use news from trusted media resources for their news output like the “top-of-the-hour” newscast. Students, researchers, bloggers and similar users may find this of use to make sure that any media coverage they cite is from trustworthy sources.

The UK government are even considering this tool as a “must-have” for Internet service providers to offer so that British citizens are easily warned about fake news and propaganda. This follows the same approach as how users there can have their ISPs provide a family-friendly “clean feed” free of pornography or hate speech.

It is now being rolled out around the rest of Europe, with France and Italy already on board with this service for their mastheads. Germany is yet to come on board, but once a German presence is established, other countries speaking the same language, such as Austria and Switzerland, could climb on board very quickly.

As NewsGuard rolls out around the world, it could effectively become one of the main “go-to” points to perform due-diligence research on that news outlet or its content. It will also become very relevant as our news and information is delivered through podcasts and Internet-delivered radio and TV broadcasts or we use Internet-connected devices to receive our news and information.

Australian media raises the issue of fake celebrity and brand endorsements

Article

Event page for spammy Facebook event

Facebook is one of many online platforms being used for fake celebrity and brand endorsements

Networks warn of fake ads, scams. | TV Tonight

Media Watch broadcast on this topic | ABC

My Comments

An issue that was called out at the end of April this year is the improper use of celebrity and brand endorsements by online snake-oil salesmen.

ABC’s Media Watch and TV Tonight talked of this situation appearing on Facebook and other online advertising platforms. Typically the people and entities being affected were household names associated with the “screen of respect” in the household, i.e. the TV screen in the lounge room. They ranged from the free-to-air broadcasters themselves, including the ABC, which adheres strictly to the principles established by the BBC about endorsement of commercial goods and services, to TV shows like “The Project” or “Sunrise” and TV’s key personalities like Eddie McGuire and Jessica Rowe.

Lifehacker Website

…. as are online advertising platforms

Typically the ads containing the fake endorsements would appear as part of Facebook’s News Feed or in Google’s advertising networks, especially the search-driven AdWords network. I also see this as a risk with other online ad networks that operate on a self-serve process and offer low-risk high-return advertising packages such as “cost-per-click-only” deals, and I had called this out in an earlier article about malvertisement activity.

There has been recent investigation activity by the Australian Competition and Consumer Commission concerning the behaviour of the Silicon Valley online-media giants and their impact on traditional media around the world. It will also include issues relating to Google and its control over online search and display advertising.

Facebook have been engaging in efforts to combat spam, inauthentic account behaviour and similar activity across its social-network brands. But they have found that it is a “whack-a-mole” effort where other similar sites or the same site pops up even if they shut it down successfully. I would suspect that a lot of these situations are based around pages or ads linking to a Website hosted somewhere on the Internet.

A question that was raised regarding this kind of behaviour is whether Facebook, Google and others should be making money out of these scam ads that come across their online platforms. This question would extend to the “estate agents” and “landlords” of cyberspace i.e. the domain-name brokers and the Webhosts who offer domain names or Webhosting space to people to use for their online presence.

There is also the idea of maintaining a respectable, brand-safe, family-and-workplace-friendly media experience in the online world, which would be very difficult. This issue affects both the advertisers who want to work in a respectable brand-safe environment and the online publishers who don’t want their publications to convey a downmarket image, especially if they invest time and money in creating quality content.

As we see more ad-funded online content appear, there will be the call by brands, publishers and users to gain control over the advertising ecosystem to keep scam advertising along with malvertisements at bay along with working against ad fraud. It will also include verifying the legitimacy of any endorsements that are associated with a brand or personality.

A good practice for advertisers and publishers in the online space would be to keep tabs on the online advertising behaviour that is taking place. For example, an advertiser can report questionable impressions of their advertising campaigns, including improper endorsement activity, while a publisher can report ads for fly-by-night activity that appear in their advertising space to the ad networks they use. Or users could report questionable ads on the Social Web to the various social-network platforms they see them appear on.

Australian Electoral Commission weighs in on online misinformation

Article

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Are you sure you are casting your vote or able to cast your vote without undue influence?

Australian Electoral Commission boots online blitz to counter fake news | ITNews

Previous coverage

Being cautious about fake news and misinformation in Australia

From the horse’s mouth

Australian Electoral Commission

Awareness Page

Press Release

My Comments

I regularly cover the issue of fake news and misinformation especially when this happens around election cycles. This is because it can be used as a way to effectively distort what makes up a democratically-elected government.

When the Victorian state government went to the polls last year, I ran an article about the issue of fake news and how we can defend ourselves against it during election time. This was because of Australia hosting a run of elections that are ripe for a concerted fake-news campaign – state elections for the two most-populous states in the country and a federal election.

It is being seen as important due to the fact that the IT systems maintained by Australian Parliament House and the main Australian political parties fell victim to a cyber attack around February 2019, with this hack being attributed to a nation-state. This can lead to the discovered information being weaponised against the candidates or their political parties, similar to the email attack against the Democratic Party in the USA during early 2016 which skewed the US election towards Donald Trump and America towards a highly-divided nation.

The issue of fake news, misinformation and propaganda has been on our lips over the last few years due to us switching away from traditional news-media sources to social media and online search and news-aggregation sites. Similarly, well-respected newsrooms are shrinking due to reduced circulation and ratings for newspapers and TV/radio stations, driven by our use of online resources. This leads to poorer-quality news reporting of a similar standard to entertainment-focused media like music radio.

A simplified, low-cost, no-questions-asked path to create and present material, some of which can be questionable, has been facilitated by personal computing and the Internet. It is now augmented by the ability to create deepfake image and audio-visual content that uses still images, audio or video clips to represent a very convincing falsehood, thanks to artificial intelligence. This content can then be easily promoted through popular social-media platforms or paid positioning in search engines.

Such content takes advantage of the border-free nature of the Internet to allow for an actor in one jurisdiction to target others in another jurisdiction without oversight of the various election-oversight or other authorities in either jurisdiction.

I mentioned what Silicon Valley’s online platforms are doing in relation to this problem such as restricting access to online advertising networks; interlinking with fact-check organisations to identify fake news; maintaining a strong feedback loop with end-users; and operating robust user-account-management and system-security policies, procedures and protocols. Extant newsrooms are even offering fact-check services to end-users, online services and election-oversight authorities to build up a defence against misinformation.

But the Australian Electoral Commission is taking action through a public-education campaign regarding fake news and misinformation during the Federal election. They outlined that their legal remit doesn’t cover the truthfulness of news content, but their campaign outlines how to check whether the information comes from a reliable or recognised source, how current it is and whether it could be a scam. Of course, there are cross-border jurisdictional issues, especially where material comes in from overseas sources.

They outlined that their remit covers the “authorisation” or provenance of the electoral communications that appear through advertising platforms. As well, they underscore the role of other Australian government agencies like the Australian Competition and Consumer Commission who oversee advertising issues and the Australian Communications And Media Authority who oversee broadcast media. They also have provided links to the feedback and terms-and-conditions pages of the main online services in relationship to this issue.

These Federal agencies are also working on the issue of electoral integrity in the context of advertising and other communication to the voters by candidates, political parties or other entities; along with the “elephant in the room” that is foreign interference; and security of these polls including cyber-security.

But what I have outlined in the previous coverage is to look for information that qualifies the kind of story being published especially if you use a search engine or aggregated news view; to trust your “gut reaction” to the information being shared especially if it is out-of-touch with reality or is sensationalist or lurid; checking the facts against established media that you trust or other trusted resources; or even checking for facts “from the horse’s mouth” such as official press releases.

Inspecting the URL in your Web browser’s address bar before the first “/” to see if there is more than what is expected for a news source’s Web site can also pay dividends. But this can be a difficult task if you are using your smartphone or a similarly-difficult user interface.
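
That address-bar check can even be automated. The sketch below pulls out everything before the first “/” (the hostname) and flags lookalike hosts that merely contain a known masthead’s name rather than actually belonging to its domain. The masthead domain and the impostor address are just examples.

```python
from urllib.parse import urlsplit

# Example masthead domain used for illustration
KNOWN_OUTLET = "theage.com.au"

def hostname_of(url: str) -> str:
    """Return the part of the address before the first '/' after the
    scheme, i.e. the hostname worth inspecting."""
    return urlsplit(url).hostname or ""

def looks_like_impostor(url: str, outlet: str = KNOWN_OUTLET) -> bool:
    """Flag hosts that contain the outlet's name but neither equal its
    registered domain nor sit underneath it as a subdomain."""
    host = hostname_of(url)
    return outlet in host and not (host == outlet or host.endswith("." + outlet))
```

Here `www.theage.com.au` passes, while something like `theage.com.au.breaking-news.example` is flagged, because the outlet’s name appears before the real registered domain.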

I also even encourage making more use of established trusted news sources including their online presence as a primary news source during these critical times. Even the simple act of picking up and reading that newspaper or turning on the radio or telly can be a step towards authoritative news sources.

As well, I also encourage the use of the reporting functionality or feedback loop offered by social media platforms, search engines or other online services to draw attention to contravening content. This was an action I took as a publisher regarding an ad that appeared on this site which had the kind of sensationalist headline that is associated with fake news.

The issue of online misinformation especially during general elections is still a valid concern. This is more so where the online space is not subject to the kinds of regulation associated with traditional media in one’s home country and it becomes easy for foreign operators to launch campaigns to target other countries. What needs to happen is a strong information-sharing protocol in order to place public and private stakeholders on alert about potential election manipulation.

WhatsApp now highlights messaging services as a fake-news vector

Articles

WhatsApp debuts fact-checking service to counter fake news in India | Engadget

India: WhatsApp launches fact-check service to fight fake news | Al Jazeera

From the horse’s mouth

WhatsApp

Tips to help prevent the spread of rumors and fake news (User Advice)

Video – Click or tap to play

My Comments

For as long as the World Wide Web has existed, email has been used as a way to share online news amongst people in your social circle.

Typically this has shown up in the form of jokes, articles and the like appearing in your email inbox from friends, colleagues or relatives, sometimes with these articles forwarded on from someone else. It also has been simplified through the ability to add multiple contacts from your contact list to the “To”, “Cc” or “Bcc” fields in the email form or create contact lists or “virtual contacts” from multiple contacts.

The various instant-messaging platforms have also become a vector for sharing links to articles hosted somewhere on the Internet in the same manner as email, as have the carrier-based SMS and MMS texting platforms when used with a smartphone.

But the concern raised about the distribution of misinformation and fake news has been focused on the popular social-media and image/video-sharing platforms. This is while fake news and misinformation creep into your inbox or instant-messaging client thanks to one or more of your friends who like passing on this kind of information.

WhatsApp, a secure instant-messaging platform owned by Facebook, is starting to tackle this issue head-on with its Indian userbase as that country enters its election cycle for the main general elections. They are picking up on the issue of fake news and misinformation thanks to the Facebook group being brought into the public limelight over this issue. As well, Facebook has recently been clamping down on inauthentic behaviour that was targeting India and Pakistan.

WhatsApp is now highlighting the fake-news problem in India, especially as this platform is seen as a popular instant messenger within that country. They are working with a local fact-checking startup called Proto to create the Checkpoint Tipline, which allows users to have links that are sent to them verified. It is based around a “virtual contact” to which WhatsApp users forward questionable links or imagery.

But due to the nature of its end-to-end encryption and the fact that the service is purely a messaging service, there isn’t the ability for the platform itself to verify or highlight questionable content. But they have placed limits on the number of chats one can forward a message to in order to tame the spread of rumours.
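
The forwarding cap is worth sketching because it shows what a platform can still do when end-to-end encryption hides message content: it can throttle metadata. The five-chat limit matches what WhatsApp announced for its Indian userbase; the per-message tally of forwards is my own illustrative addition.

```python
# Limit announced by WhatsApp for India; treat it as an assumption elsewhere.
FORWARD_LIMIT = 5

def forward(message_id: str, recipients: list, forwarded_counts: dict) -> list:
    """Deliver a forwarded message to at most FORWARD_LIMIT chats per
    action, keeping a running tally per message so heavily-forwarded
    messages could later be labelled or rate-limited."""
    allowed = recipients[:FORWARD_LIMIT]  # silently drop the excess chats
    forwarded_counts[message_id] = forwarded_counts.get(message_id, 0) + len(allowed)
    return allowed
```

Note that the server never inspects the message body; it only counts how many chats each opaque message identifier has been pushed to.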

It is also being used as a tool to identify the level of fake news and misinformation taking place on the messenger platform and to see how much of a vector these platforms are.

Personally, I would like to see the various fact-checking agencies have an email mailbox where you can forward emails with questionable links and imagery to so they can verify that rumour mail doing the rounds. It could operate in a similar vein to how the banks, tax offices and the like have set up mailboxes for people to forward phishing email to so these organisations can be aware of the phishing problem they are facing.

The only problem with this kind of service is that people who are astute and savvy are more likely to use it. It may not reach those of us who just end up passing on whatever comes our way.

Constance Hall puts trolling and bullying in the TV spotlight on Dancing With The Stars

Article

‘It hurts me so much’: Constance Hall targeted by trolls after reality TV announcement | Sydney Morning Herald

Dancing with the Stars: Constance Hall is ready to rumba! Here’s what you need to know about her | NowToLove.com

Video – Click or tap to play (Facebook page)


Previous Coverage on HomeNetworking01.info

Dealing with Internet trolls

Useful Resources

Crash Override Network – A resource centre based in the USA focused on online-abuse issues.

My Comments

Constance Hall, an online personality who has run a blog and maintains a Facebook public presence, is participating in the latest Dancing With The Stars season on the Ten Network in Australia. But because she decided to star in this popular dancing talent-quest TV show, she has received a lot of online abuse from various trolls. She has often copped this abuse in her online presence due to how she looks, her outspokenness or her alternative lifestyle.

I have seen this happen with two of the contestants in MasterChef Australia season 10. One of them was accused of being close to George Calombaris because she had him taste a sample of something she was preparing before cooking it in quantity for the contest, while the other who was a nutritionist was turning out desserts which went against the grain of someone whose profession was about “clean eating”.

Even a few years ago, I observed a situation of online abuse directed at a cafe I was visiting because they wouldn’t accept the placement of a protest group’s campaign flyers near their till. At the time, their neighbourhood was effectively being divided by the potential presence of a McDonald’s fast-food restaurant, with this protest group against the proposed development. I defended their right to control their own space, but they still had to effectively shut down the commenting ability on their Facebook presence.

This kind of bullying has become very toxic with the Gamergate saga which was an attack on female game developers and female gaming journalists. This situation got to a point where there were death threats against one of the game developers along with the publication of her home address and phone number.

Typically this can be about a perverse innuendo about intimate relationships involving one or more of the victims; that the victim doesn’t “fit the mould” expected of them; or that they are “taking the wrong side” on an issue.

But Constance Hall produced a Facebook video addressing this kind of behaviour in the online space. Here, it was about stopping the acceptance and normalisation of online bullying and she had related it to what happens to children and teenagers. This video was even played as part of the introductory video package that preceded her dance routine in Dancing With The Stars. This meant that the issues being raised in the video had a good chance of being aired on prime-time traditional TV.

It is also part of her personal campaign to reach out to and encourage teenagers and other young people who are at risk of being bullied during their life’s journey especially in the online context.

A good practice to deal with trolling in an online environment would be to “insert” some common sense into the conversation. It may be best to approach it in a neutral form without appearing to take sides.

If it is getting out of control, most social-media platforms and some other online environments have the ability to “mute” participants or “hide” conversation threads so you don’t have them in your view. Social-media platforms also have the ability to block participants so they can’t follow you. As well, you may also have to report offensive behaviour to the online environment that it’s occurring in if it is becoming consistent.

If the online environment has the ability for users to upvote or downvote comments or threads, it can be used as a way to bury questionable comments. It is a feature that has appeared in some commenting platforms like Disqus or some online forum software, but is slowly being rolled out to major social media platforms like Facebook.
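
As a sketch of how vote-based burying works in these commenting systems, the snippet below ranks comments by net score and collapses, rather than deletes, anything that falls below a visibility threshold. The threshold value is hypothetical; real platforms tune it per community.

```python
# Hypothetical visibility threshold: comments at or below this net score
# are collapsed from view but remain available if a reader expands them.
HIDE_BELOW = -4

def rank_comments(comments: list) -> list:
    """Sort comments by net score (upvotes minus downvotes), marking
    heavily-downvoted ones as collapsed instead of removing them."""
    for c in comments:
        c["net"] = c["up"] - c["down"]
        c["collapsed"] = c["net"] <= HIDE_BELOW
    return sorted(comments, key=lambda c: c["net"], reverse=True)
```

The effect is the “burying” described above: questionable comments sink to the bottom and are hidden by default, while the community rather than a moderator decides what gets seen first.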

I applaud Constance Hall for how she has turned a negative experience around for something positive as well as underscoring a “you can do it” approach. This is more so for people who are or are likely to become an online personality who can easily fall victim to the ugly side of the Internet.

Being cautious about fake news and misinformation in Australia

Previous Coverage

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Are you sure you are casting your vote or able to cast your vote without undue influence?

Being aware of fake news in the UK

Fact-checking now part of the online media-aggregation function

Useful Australian-based resources

ABC Fact Check – run in conjunction with RMIT University

Political Parties

Australian Labor Party (VIC, NSW)

Liberal Party – work as a coalition with National Party (VIC, NSW)

National Party – work as a coalition with Liberal Party (VIC, NSW)

Australian Greens – state branches link from main page

One Nation (Pauline Hanson)

Katter’s Australia Party

Derryn Hinch’s Justice Party

Australian Conservatives

Liberal Democratic Party

United Australia Party

My Comments

Over the next six months, Australia will see some very critical general elections come to pass, both on a federal level and in the two most-highly-populated states that host most of that country’s economic and political activity. On October 30 2018, the election writs were served in the state of Victoria for its general election to take place on November 24 2018. New South Wales is expected to go to the polls for its general election on 23 March 2019, and the whole country is expected to go to the polls for the federal general election by 18 May 2019.

As these election cycles take place over a relatively short space of time, there is a high risk that Australians could fall victim to misinformation campaigns. This can subsequently lead to state and federal ballots being cast that steer the country against the grain, like what happened in 2016 with the USA voting in Donald Trump as their President and the UK voting to leave the European Union.

Google News - desktop Web view

Look for tags within Google News that describe the context of the story

The issue of fake news and misinformation is being seen as increasingly relevant as we switch away from traditional media towards social media and our smartphones, tablets and computers for our daily news consumption.  This is thanks to the use of online search and news-aggregation services like Google News; or social media like Facebook or Twitter which can be seen by most of us as an “at-a-glance” view of the news.

As well, a significant number of well-known newsrooms are becoming smaller due to the reduced circulation and ratings for their newspaper or radio / TV broadcast thanks to the use of online resources for our news. It can subsequently lead to poor-quality news reporting and presentation with a calibre equivalent to the hourly news bulletin offered by a music-focused radio station. It also leads to various mastheads plagiarising content from other newsrooms that place more value on their reporting.

The availability of low-cost or free no-questions-asked Web and video hosting along with easy-to-use Web-authoring, desktop-publishing and desktop-video platforms make it feasible for most people to create a Web site or online video channel. It has led to an increased number of Websites and video channels that yield propaganda and information that is dressed up as news but with questionable accuracy.

Another factor that has recently been raised in the context of fake news, misinformation and propaganda is the creation and use of deepfake image and audio-visual content. This is where still images, audio or video clips in the digital domain are altered using artificial-intelligence technology to show a falsehood, in order to convince viewers that they are dealing with an original audio-visual resource. The audio content can be made to mimic an actual speaker’s voice and intonation as part of creating a deepfake soundbite or video clip.

It then becomes easy to place fake news, propaganda and misinformation onto easily-accessible Web hosts including YouTube in the case of videos. Then this content would be propagated around the Internet through the likes of Twitter, Facebook or online bulletin boards. It is more so if this content supports our beliefs and enhances the so-called “filter bubble” associated with our beliefs and media use.

There is also the fact that newsrooms without the resources to rigorously scrutinise incoming news could pick this kind of content up and publish or broadcast this content. This can also be magnified with media that engages in tabloid journalism that depends on sensationalism to get the readership or keep listeners and viewers from switching away.

The borderless nature of the Internet makes it easy to set up a presence in one jurisdiction to target the citizens of another jurisdiction in a manner that avoids being caught by that jurisdiction’s election-oversight, broadcast-standards or advertising-standards authority. Along with that, a significant number of jurisdictions focus their political-advertising regulation on the traditional media platforms even though we are making more use of online platforms.

Recently, the Australian Electoral Commission, along with the Department of Home Affairs, Australian Federal Police and ASIO, established an Electoral Integrity Assurance Task Force. This was in advance of recent federal byelections, such as the Super Saturday byelections, where there was the risk of clandestine foreign interference taking place that could affect the integrity of those polls.

But the issue I am drawing attention to here is the use of social media or other online resources to run fake-news campaigns to sway the populace’s opinion for or against certain politicians. This is exacerbated by the use of under-resourced newsrooms that could get such material seen as credible in the public’s eyes.

But most of Silicon Valley’s online platforms are taking the following steps to counter fake news, propaganda and disinformation.

Firstly, they are turning off the money-supply tap by keeping their online advertising networks away from sites or apps that spread misinformation.

They also are engaging with various fact-check organisations to identify fake news that is doing the rounds and tuning their search and trending-articles algorithms to bury this kind of content.

Autocomplete list in Google Search Web user interface

Google users can report Autocomplete suggestions that they come across in their search-engine experience.

They are also maintaining a feedback loop with their end-users by allowing them to report fake-news entries in their home page or default view. This includes search results or autocomplete entries in Google’s search-engine user interface. This is facilitated through a “report this” option that is part of the service’s user interface or help pages.

Most of the social networks and online-advertising services are also implementing robust user-account-management and system-security protocols. This includes eliminating or suspending accounts that are used for misinformation. It also includes checking the authenticity of accounts running pages or advertising campaigns that are politically-targeted through methods like street-address verification.

In the case of political content, social networks and online-advertising networks are implementing easily-accessible archives of all political advertising or material that is being published, including where the material is being targeted.

ABC FactCheck – the ABC’s fact-checking resource that is part of their newsroom

Initially these efforts are taking place within the USA but Silicon Valley is rolling them out across the world at varying timeframes and with local adaptations.

Personally, I would still like to see a strong dialogue between the various Social Web, search, online-advertising and other online platforms; and the various government and non-government entities overseeing election and campaign integrity and allied issues. This can be about oversight and standards regarding political communications in the online space along with data security for each stakeholder.

What can you do?

If you are viewing a collection of headlines on a search or news-aggregation site or app, look for any information that qualifies the kind of story. Here, pay attention to tags or other metadata like “satire”, “fact checking” or “news” that describe the context of the story or other attributes.

Most search engines and news-aggregation Websites will show this information in their desktop or mobile user interface, and are being engineered to show a richer set of details. You may find that you have to do something extra, like clicking a “more” icon or dwelling on the headline, to bring up this extra detail on some user interfaces.
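Surfacing these context tags alongside headlines can be sketched as a simple labelling pass over an aggregator’s item list. The tag vocabulary and data layout here are illustrative assumptions, not any particular service’s schema.

```python
# Illustrative sketch: prepend each headline with its first context tag,
# the way an aggregator might badge items as news, satire or fact-checks.
# Field names and the tag vocabulary are assumptions for this example.
headlines = [
    {"title": "Minister announces new policy", "tags": ["news"]},
    {"title": "Area man outraged at everything", "tags": ["satire"]},
    {"title": "Claim about voting rules debunked", "tags": ["fact checking"]},
]

def label(item):
    """Surface the item's first context tag alongside the headline."""
    return f'[{item["tags"][0].upper()}] {item["title"]}'

for item in headlines:
    print(label(item))
```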

Trust your gut reaction to that claim being shared around social media. You may realise that a claim associated with fake news may be out of touch with reality. Sensationalised or lurid headlines are a usual giveaway, along with missing information or copy that whips up immediate emotional responses from the reader.

Check the host Website or use a search engine like Google to see whether the news sources you trust cover that story. You may come across one or more tools that identify questionable news easily, typically in the form of a plug-in or extension that works with your browser if its functionality can be expanded with these kinds of add-ons. This is something that is more established with browsers that run on regular Windows, Mac or Linux computers.

It is also a good idea to check for official press releases or similar material offered “from the horse’s mouth” by the candidates, political parties, government departments or similar organisations themselves. In some cases during elections, some of the candidates may run their own Websites, or they may run a Website that links from the political party’s Website. The material you find on the Websites run by these organisations may indicate whether you are dealing with a “beat-up” or exaggeration of the facts.

As you do your online research into a topic, make sure that you are familiar with how the URLs are represented in your browser’s address bar for the various online resources that you visit. Here, be careful if a resource has more than is expected between the “.com”, “.gov.au” or similar domain-name ending and the first “/” leading to the actual online resource, because lookalike hostnames often put a trusted name in front of their own domain.
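The hostname check described above can be expressed mechanically: a genuine subdomain ends with the official domain, whereas a lookalike merely starts with it. This is a minimal sketch using Python’s standard URL parser; the domain names used are examples only.

```python
from urllib.parse import urlparse

def is_official_site(url: str, official_domain: str) -> bool:
    """Check that the URL's hostname really belongs to the official
    domain, rather than a lookalike that merely contains it."""
    host = (urlparse(url).hostname or "").lower()
    # A genuine subdomain ends with ".official-domain";
    # the bare domain itself is also acceptable.
    return host == official_domain or host.endswith("." + official_domain)

# A lookalike puts the trusted name BEFORE its own domain:
print(is_official_site("https://www.aec.gov.au/voting", "aec.gov.au"))
print(is_official_site("https://aec.gov.au.fakenews.example/vote", "aec.gov.au"))
```

The first check passes because `www.aec.gov.au` is a true subdomain; the second fails because `aec.gov.au.fakenews.example` only uses the trusted name as a prefix.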

Kogan Internet table radio

Sometimes the good ol’ radio can be the trusted news source

You may have to rely on getting your news from one or more trusted sources. This would include the online presence offered by these sources. Or it may be about switching on the radio or telly for the news or visiting your local newsagent to get the latest newspaper.

Examples of these are the ABC (Radio National, Local Radio, News Radio, the main TV channel and the News 24 TV channel), SBS TV, or the Fairfax newspapers. Music radio stations that are part of a family run by a talk-radio network, like the ABC with their ABC Classic FM or Triple J services, will have an hourly newscast with news from that network. But be careful when dealing with tabloid journalism or commercial talkback radio, because you may be exposed to unnecessary exaggeration or distortion of facts.

As well, use the social-network platform’s or search engine’s reporting functionality to draw attention to fake news, propaganda or misinformation that is being shared or highlighted on that online service. In some cases like reporting inappropriate autocomplete predictions to Google, you may have to use the platform’s help options to hunt for the necessary resources.

Here, as we Australians face a run of general-election cycles that can be very tantalising for clandestine foreign interference, we have to be on our guard regarding fake news, propaganda and misinformation that could affect the polls.

Facebook clamps down on voter-suppression misinformation

Article

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Are you sure you are casting your vote or able to cast your vote without undue influence?

Facebook Extends Ban On Election Fakery To Include Lies About Voting Requirements | Gizmodo

From the horse’s mouth

Facebook

Expanding Our Policies on Voter Suppression (Press Release)

My Comments

Over recent years, misinformation and fake news has been used as a tool to attack the electoral process in order to steer the vote towards candidates or political parties preferred by powerful interests. This has been demonstrated through the UK Brexit referendum and the USA Presidential Election in 2016, with out-of-character results emanating from those elections. It has therefore made us more sensitive to the power of misinformation and its use in influencing an election cycle, with most of us looking towards established news outlets for our political news.

Another attack on the electoral process in a democracy is the use of misinformation or intimidation to discourage people from registering on the electoral rolls including updating their electoral-roll details or turning up to vote. This underhand tactic is typically to prevent certain communities from casting votes that would sway the vote away from an area-preferred candidate.

Even Australia, with its compulsory voting and universal suffrage laws, isn’t immune from this kind of activity, as demonstrated in the recent federal byelection for the Batman (now Cooper) electorate. Here, close to the election day, there was a robocall campaign targeted at older people in the north of the electorate who were likely to vote for the Australian Labor Party candidate rather than the area-preferred Greens candidate.

But this is a very common trick performed in the USA against minority, student or other voters to prevent them casting votes towards liberal candidates. This manifests in accusations about non-citizens casting votes or the same people casting votes in multiple electorates.

Facebook have taken further action against voter-suppression misinformation by including it in their remit against fake news and misinformation. This action has been taken as part of Silicon Valley’s efforts to work against fake news during the US midterm Congressional elections.

At the moment, this effort applies to information regarding exaggerated identification or procedural requirements concerning enrolment on the electoral rolls or casting your vote. It doesn’t yet apply to reports about conditions at the polling booths like opening hours, overcrowding or violence. Nor does this effort approach the distribution of other misinformation or propaganda to discourage enrolment and voting.

US-based Facebook end-users can use the reporting workflow to report voter-suppression posts to Facebook. This is through the use of an “Incorrect Voting Info” option that you select when reporting posted content to Facebook. Here, it will allow this kind of information to be verified by fact-checkers that are engaged by Facebook, with false content “buried” in the News Feed along with additional relevant content being supplied with the article when people discover it.

This is alongside a constant Facebook effort to detect and remove fake accounts existing on the Facebook platform along with increased political-content transparency across its advertising platforms.

As I have always said, the issue regarding misleading information that influences the election cycle can’t just be handled by social-media and advertising platforms themselves. These platforms need to work alongside the government-run electoral-oversight authorities and similar organisations that work on an international level to exchange the necessary intelligence to effectively identify and take action against electoral fraud and corruption.

Google to keep deep records of political ads served on their platforms

Articles

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Are you sure you are casting your vote without undue influence?

Google Releases Political Ad Database and Trump Is the Big Winner | Gizmodo

From the horse’s mouth

Google

Introducing A New Transparency Report For Political Ads (Blog Post)

Transparency Report – Political Advertising On Google (Currently relevant to federal elections in the USA)

Advertising Policies Help Page – Political Advertising (Key details apply to USA Federal elections only)

My Comments

If you use YouTube as a free user, or surf around the Internet to the many ad-facilitated blogs and Websites like this one, you will find that the display ads hosted there are provided by an ad network owned or managed by Google. Some free ad-funded mobile apps may also show ads facilitated through Google’s ad networks, and some advertisers pay to have links to their online resources placed at the top of the Google search-results list.

Online ad - to be respected like advertising in printed media

Google to keep records of political ads that appear on these sites so they have the same kind of respect as traditional print ads

Over the past few years, there has been a strong conversation regarding the authenticity of political advertising on the online space thanks to the recent election-meddling and fake news scandals. This concern has been shown due to the fact that the online space easily transcends jurisdictional borders and isn’t as regulated as traditional broadcast, print and away-from-home advertising especially when it comes to political advertising.

Then there is also the fact that relatively-open publishing platforms can be used to present content of propaganda value as editorial-grade content. The discovery of this content can be facilitated through search engines and the Social Web whereupon the content can even be shared further.

Recently Facebook have taken action to require authentication of people and other entities behind ads hosted on their platforms, as well as Pages or Public Profiles with high follower counts. This is in conjunction with providing end-users access to archival information about ad campaigns run on that platform. It is part of increased efforts by them and Google to gain control of political ads appearing on their platforms.

But Google have taken things further by requiring authentication and proof of legitimate residency in the USA for entities publishing political ads through Google-managed ad platforms that target American voters on a federal level. As well, they are keeping archival information about the political ads, including the ads’ creatives, who sponsored each ad and how much was spent with Google on the campaign. They are even making available software “hooks” to this data so that researchers, concerned citizens, political watchdog groups and the like can draw it in to their IT systems for further research.
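Once that archival data is drawn down, even simple tooling can answer questions like “who is spending the most?”. This is a sketch over a made-up extract; the real archive’s file layout and column names may well differ, so `advertiser_name` and `spend_usd` are placeholder assumptions.

```python
import csv
import io

def rank_by_spend(csv_text: str):
    """Return (advertiser, spend) pairs sorted by declared spend,
    highest first. Column names are assumptions for this sketch."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return sorted(
        ((r["advertiser_name"], int(r["spend_usd"])) for r in rows),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Hypothetical extract standing in for a downloaded archive file
sample = (
    "advertiser_name,spend_usd\n"
    "Example Campaign Committee,120000\n"
    "Another PAC,45000\n"
    "Issue Group,300000\n"
)

for name, spend in rank_by_spend(sample):
    print(f"{name}: ${spend:,}")
```

The same pattern extends to grouping spend by targeted region or date range once the real archive’s fields are known.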

If you view a political ad in the USA on this site or other sites that use display advertising facilitated by Google, you will find out who is behind that ad if you click or tap on the blue arrow at the top right hand corner of that ad. Then you will see the disclosure details under the “Why This Ad” heading. Those of you who use YouTube can bring up this same information if you click or tap on the “i” (information) or three-dot icon while the ad is playing.

Google are intending to roll these requirements out for state-level and local-level campaigns within the USA as well as rolling out similar requirements with other countries and their sub-national jurisdictions. They also want to extend this vendor-based oversight towards issues-based political advertising which, in a lot of cases, makes up the bulk of that kind of advertising.

Personally I would also like to see Google and others who manage online ad platforms be able to “keep in the loop” with election-oversight authorities like the USA’s Federal Election Commission or the Australian Electoral Commission. Here, it can be used to identify inordinate political-donation and campaign-spending activity that political parties and others are engaging in.

How can social media keep itself socially sane?

Broadcast

Facebook login page

Four Corners (ABC Australia) – Inside Facebook

iView – Click to view

Transcript

My Comments

I had just watched the Four Corners “Inside Facebook” episode on ABC TV Australia, which touched on the issues and impact that Facebook was having concerning content made available on that platform. It was in relationship to recent questions concerning the Silicon Valley social-media and content-aggregation giants and what their responsibility is regarding content made available by their users.

I also saw the concepts that were raised in this episode coming to the fore over the past few weeks with the InfoWars conspiracy-theory site saga that was boiling over in the USA. There, concern was being raised about the vitriol that the InfoWars site was posting up, especially in relationship to recent school shootings in that country. At the time, podcast-content directories like Spotify and Apple iTunes were pulling podcasts generated by that site, while other platforms were weighing up what to do with its content.

The telecast highlighted how the content moderation staff contracted by Facebook were handling questionable content like self-harm, bullying and hate speech.

For most of the time, Facebook took a content-moderation approach where the bare minimum of action was taken to deal with questionable content. This was because if they took a heavy-handed approach to censoring content that appeared on the platform, end-users would drift away from it. But recent scandals and issues, like the Cambridge Analytica scandal and the allegations regarding fake news, have been putting Facebook on edge regarding this topic.

Drawing attention to and handling questionable content

At the moment, Facebook are outsourcing most of the content-moderation work to outside agencies and have been very secretive about how this is done. But the content-moderation workflow is achieved on a reactive basis in response to other Facebook users using the “report” function in the user-interface to draw their attention to questionable content.

This is very different to managing a small blog or forum which is something one person or a small number of people could do thanks to the small amount of traffic that these small Web presences could manage. Here, Facebook is having to engage these content-moderation agencies to be able to work at the large scale that they are working at.

The process of reporting questionable content, especially abusive content, is hampered by the weak user experience offered for reporting this kind of content. This is more so where Facebook is used through a user interface that offers less than the full Web-based experience, such as some native mobile-platform apps.

This matters because, in most democratic countries, social media, unlike traditional broadcast media, is not subject to government oversight and regulation. Nor is it subject to oversight by “press councils” like what would happen with traditional print media.

Handling content

When a moderator is faced with content identified as having graphic violence, they have three options: ignore it, leaving it as-is on the platform; delete it, removing it from the platform; or mark it as disturbing, in which case the content is subject to restrictions on who can see it and how it is presented, including a warning notice that the user must click through before the content is shown. As well, they can notify the publisher who put up the content about the action that has been taken with it. In some cases, “marking as disturbing” may be a method used to raise common awareness about the situation being portrayed in the content.

They also touched on dealing with visual content depicting child abuse. One of the factors raised is that the more views content depicting abuse receives, the more the abuse against the victim of that incident is multiplied.

As well, child-abuse content isn’t readily reported to law-enforcement authorities unless it is streamed live using Facebook’s live-video streaming function. This is because the video clip could have been put up by someone at a prior time and on-shared by someone else, or it could be a link to content already hosted somewhere else online. But Facebook and their content-moderating agencies engage child-safety experts as part of their moderating team to determine whether it should be reported to law enforcement (and which jurisdiction should handle it).

When facing content that depicts suicide, self-harm or similar situations, the moderating agencies treat these as high-priority situations. Here, if the content promotes this kind of self-destructive behaviour, it is deleted. Other material is flagged so as to show a “checkpoint” on the publisher’s Facebook user interface, where the user is invited to take advantage of mental-health resources local to them and particular to their situation.

But there are situations where a desperate Facebook user posts this kind of content as a personal “cry for help”, which isn’t healthy. Typically it is a way to let their social circle, i.e. their family and friends, know of their personal distress.

Another issue that has also been raised is the existence of underage accounts, where children under 13 are operating a Facebook presence by lying about their age. But these accounts are only dealt with if a Facebook user draws attention to the existence of the account.

An advertising–driven platform

What was highlighted in the Four Corners telecast was that Facebook, like the other Silicon Valley social-media giants, make most of their money out of on-site advertising. Here, the more engagement end-users have with these platforms, the more advertising appears on their pages, including new ads, which leads to more money for the social-media giant.

This is why some questionable content still exists on Facebook and similar platforms: it increases engagement with them, even though most of us who use these platforms aren’t likely to actively seek out this kind of content.

But this show hadn’t even touched on the concept of “brand safety” which is being raised in the advertising industry. This is the issue of where a brand’s image is likely to appear next to controversial content which could be seen as damaging to the brand’s reputation, and is a concept highly treasured by most consumer-facing brands maintaining the “friendly to family and business” image.

A very challenging task

Moderating staff will also find themselves in very mentally-challenging situations while they do this job because in a lot of cases, this kind of disturbing content can effectively play itself over and over again in their minds.

The hate speech quandary

The most contentious issue that Facebook, like the rest of the Social Web, is facing is hate speech. But what qualifies as hate speech, and how obvious does it have to be before it has to be acted on? The broadcast initially drew attention to an Internet meme questioning “one’s (white) daughter falling in love with a black person”, which was judged not to underscore an act of hatred. The factors that may be used as qualifiers include the minority group involved, the role they play in the accusation, the context of the message, and the kind of pejorative terms used.

They are also underscoring the provision of a platform to host legitimate political debate. But Facebook can delete resources if a successful criminal action was taken against the publisher.

Facebook has a “shielded” content policy for highly-popular political pages, which is something similarly afforded to respected newspapers and government organisations; and such pages could be treated as if they are a “sacred cow”. Here, if there is an issue raised about the content, the complaint is taken to certain full-time content moderators employed directly by Facebook to determine what action should be taken.

A question that was raised in the context of hate speech was the successful criminal prosecution of alt-right activist Tommy Robinson for sub judice contempt of court in Leeds, UK. Here, he had used Facebook to make a live broadcast about a criminal trial in progress as part of his far-right agenda. But Twitter had taken down the offending content while Facebook didn’t act on the material. From further personal research on extant media coverage, he had committed a similar contempt-of-court offence in Canterbury, UK, thus underscoring a similar modus operandi.

A core comment that was raised about Facebook and the Social Web is that the more open the platform, the more likely one is to see inappropriate, unpleasant or socially-undesirable content on that platform.

But Facebook have been running a public-relations campaign regarding cleaning up its act in relation to the quality of content that exists on the platform. This is in response to the many inquiries it has been facing from governments regarding fake news, political interference, hate speech and other questionable content and practices.

Although Facebook is the common social-media platform in use, the issues drawn out regarding the posting of inappropriate content also affect other social-media platforms and, to some extent, other open freely-accessible publishing platforms like YouTube. There is also the fact that these platforms can be used to link to content already hosted on other Websites, like those facilitated by cheap or free Web-hosting services.

There may be some depression, suicide and related issues that I have covered in this article that may concern you or someone else using Facebook. Here are some numbers for relevant organisations in your area who may help you or the other person with these issues.

Australia

Lifeline

Phone: 13 11 14
http://lifeline.org.au

Beyond Blue

Phone: 1300 22 46 36
http://beyondblue.org.au

New Zealand

Lifeline

Phone: 0800 543 354
http://lifeline.org.nz

Depression Helpline

Phone: 0800 111 757
https://depression.org.nz/

United Kingdom

Samaritans

Phone: 116 123
http://www.samaritans.org

SANELine

Phone: 0300 304 7000
http://www.sane.org.uk/support

Eire (Ireland)

Samaritans

Phone: 1850 60 90 90
http://www.samaritans.org

USA

Kristin Brooks Hope Center

Phone: 1-800-SUICIDE
http://imalive.org

National Suicide Prevention Lifeline

Phone: 1-800-273-TALK
http://www.suicidepreventionlifeline.org/