Category: Fake News, Disinformation and Propaganda

Being cautious about fake news and misinformation in Australia

Previous Coverage

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Are you sure you are casting your vote or able to cast your vote without undue influence?

Being aware of fake news in the UK

Fact-checking now part of the online media-aggregation function

Useful Australian-based resources

ABC Fact Check – run in conjunction with RMIT University

Political Parties

Australian Labor Party (VIC, NSW)

Liberal Party – work as a coalition with National Party (VIC, NSW)

National Party – work as a coalition with Liberal Party (VIC, NSW)

Australian Greens – state branches link from main page

One Nation (Pauline Hanson)

Katter’s Australia Party

Derryn Hinch’s Justice Party

Australian Conservatives

Liberal Democratic Party

United Australia Party

My Comments

Over the next six months, Australia will see some very critical general elections, both on a federal level and in the two most-highly-populated states that host most of the country’s economic and political activity. On 30 October 2018, the election writs were served in the state of Victoria for its general election to take place on 24 November 2018. New South Wales will go to the polls for its general election on 23 March 2019, and the whole country is expected to go to the polls for the federal general election by 18 May 2019.

As these election cycles take place over a relatively short space of time, there is a high risk that Australians could fall victim to misinformation campaigns. This can subsequently lead to state and federal ballots being cast that steer the country against the grain, as happened in 2016 when the USA voted in Donald Trump as their President and the UK voted to leave the European Union.

Google News - desktop Web view

Look for tags within Google News that describe the context of the story

The issue of fake news and misinformation is being seen as increasingly relevant as we switch away from traditional media towards social media and our smartphones, tablets and computers for our daily news consumption. This is thanks to the use of online search and news-aggregation services like Google News, or social media like Facebook or Twitter, which most of us treat as an “at-a-glance” view of the news.

As well, a significant number of well-known newsrooms are becoming smaller due to reduced circulation and ratings for their newspapers or radio / TV broadcasts, thanks to the use of online resources for our news. This can lead to poor-quality news reporting and presentation, with a calibre equivalent to the hourly news bulletin offered by a music-focused radio station. It also leads to various mastheads plagiarising content from other newsrooms that place more value on their reporting.

The availability of low-cost or free no-questions-asked Web and video hosting, along with easy-to-use Web-authoring, desktop-publishing and desktop-video platforms, makes it feasible for most people to create a Website or online video channel. This has led to an increased number of Websites and video channels that peddle propaganda and information dressed up as news but of questionable accuracy.

Another factor that has recently been raised in the context of fake news, misinformation and propaganda is the creation and use of deepfake image and audio-visual content. This is where still images, audio or video clips in the digital domain are altered using artificial-intelligence technology to show a falsehood, in order to convince viewers that they are dealing with an original audio-visual resource. The audio content can be made to mimic an actual speaker’s voice and intonation as part of creating a deepfake soundbite or video clip.

It then becomes easy to place fake news, propaganda and misinformation onto easily-accessible Web hosts, including YouTube in the case of videos. This content is then propagated around the Internet through the likes of Twitter, Facebook or online bulletin boards, especially if it supports our beliefs and reinforces the so-called “filter bubble” associated with our beliefs and media use.

There is also the fact that newsrooms without the resources to rigorously scrutinise incoming news could pick this kind of content up and publish or broadcast it. This is magnified with media that engage in tabloid journalism, which depends on sensationalism to gain readership or keep listeners and viewers from switching away.

The borderless nature of the Internet makes it easy to set up a presence in one jurisdiction to target the citizens of another, in a manner that avoids the target jurisdiction’s election-oversight, broadcast-standards or advertising-standards authorities. Along with that, a significant number of jurisdictions focus their political-advertising regulation on traditional media platforms even though we are making more use of online platforms.

Recently, the Australian Electoral Commission, along with the Department of Home Affairs, Australian Federal Police and ASIO, established an Electoral Integrity Assurance Task Force. This was done in advance of recent federal byelections, such as the Super Saturday byelections, where there was a risk of clandestine foreign interference that could affect the integrity of those polls.

But the issue I am drawing attention to here is the use of social media or other online resources to run fake-news campaigns to sway the populace’s opinion for or against certain politicians. This is exacerbated by under-resourced newsrooms that could lend such material credibility in the public’s eyes.

But most of Silicon Valley’s online platforms are taking various steps to counter fake news, propaganda and disinformation.

Firstly, they are turning off the money-supply tap by keeping their online advertising networks away from sites or apps that spread misinformation.

They also are engaging with various fact-check organisations to identify fake news that is doing the rounds and tuning their search and trending-articles algorithms to bury this kind of content.

Autocomplete list in Google Search Web user interface

Google users can report Autocomplete suggestions that they come across in their search-engine experience.

They are also maintaining a feedback loop with their end-users by allowing them to report fake-news entries in their home page or default view. This includes search results or autocomplete entries in Google’s search-engine user interface. This is facilitated through a “report this” option that is part of the service’s user interface or help pages.

Most of the social networks and online-advertising services are also implementing robust user-account-management and system-security protocols. This includes eliminating or suspending accounts that are used for misinformation. It also includes checking the authenticity of accounts running pages or advertising campaigns that are politically-targeted through methods like street-address verification.

In the case of political content, social networks and online-advertising networks are implementing easily-accessible archives of all political advertising or material that is being published including where the material is being targeted at.

ABC FactCheck – the ABC’s fact-checking resource that is part of their newsroom

Initially these efforts are taking place within the USA, but Silicon Valley is rolling them out across the world on varying timeframes and with local adaptations.

Personally, I would still like to see a strong dialogue between the various Social Web, search, online-advertising and other online platforms; and the various government and non-government entities overseeing election and campaign integrity and allied issues. This can be about oversight and standards regarding political communications in the online space along with data security for each stakeholder.

What can you do?

Look for any information that qualifies the kind of story when you are viewing a collection of headlines, like on a search or news-aggregation site or app. Here, pay attention to tags or other metadata like “satire”, “fact checking” or “news” that describe the context of the story or other attributes.

Most search engines and news-aggregation Websites will show this information in their desktop or mobile user interfaces, and are being engineered to show a richer set of details. You may have to do something extra, like click a “more” icon or dwell on the headline, to bring up this extra detail on some user interfaces.

Trust your gut reaction to that claim being shared around social media. You may realise that a claim associated with fake news may be out of touch with reality. Sensationalised or lurid headlines are a usual giveaway, along with missing information or copy that whips up immediate emotional responses from the reader.

Check the host Website or use a search engine like Google to see if the news sources you trust cover that story. You may come across one or more tools that identify questionable news easily, typically in the form of a plug-in or extension that works with your browser if its functionality can be expanded with these kinds of add-ons. This is more established with browsers that run on regular Windows, Mac or Linux computers.

It is also a good idea to check for official press releases or similar material offered “from the horse’s mouth” by the candidates, political parties, government departments or similar organisations themselves. In some cases during elections, candidates may run their own Websites or ones linked from their political party’s Website. The material on these official Websites may indicate whether you are dealing with a “beat-up” or an exaggeration of the facts.

As you do your online research into a topic, make sure that you are familiar with how the URLs for the various online resources you visit are represented in your browser’s address bar. Here, be careful if a resource has more than is expected between the “.com”, “.gov.au” or similar domain-name ending and the first “/” leading to the actual online resource.
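To make this address-bar check concrete, here is a minimal Python sketch of the idea. The `looks_deceptive` helper and the example URLs are hypothetical illustrations, not a real anti-phishing tool:

```python
from urllib.parse import urlparse

def looks_deceptive(url: str, trusted_name: str) -> bool:
    """Flag URLs where a trusted-looking name (e.g. 'aec.gov.au') appears
    in the hostname but is not actually the domain the hostname ends with,
    e.g. 'http://aec.gov.au.evil-site.net/' where extra material sits
    between the expected domain-name ending and the first '/'."""
    host = urlparse(url).hostname or ""
    return trusted_name in host and not host.endswith(trusted_name)

# A genuine URL: the hostname really ends with the trusted name.
print(looks_deceptive("https://www.aec.gov.au/enrol", "aec.gov.au"))       # False
# A deceptive URL: the trusted name is buried before another domain.
print(looks_deceptive("http://aec.gov.au.evil-site.net/", "aec.gov.au"))   # True
```

A browser extension or a cautious reader applies the same reasoning by eye: read the hostname from right to left and confirm the familiar domain ending sits immediately before the first “/”.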

Kogan Internet table radio

Sometimes the good ol’ radio can be the trusted news source

You may have to rely on getting your news from one or more trusted sources. This would include the online presence offered by these sources. Or it may be about switching on the radio or telly for the news or visiting your local newsagent to get the latest newspaper.

Examples of these are the ABC (Radio National, Local Radio, NewsRadio, the main TV channel and the News 24 TV channel), SBS TV, or the Fairfax newspapers. Some music radio stations that are part of a family anchored by a talk-radio network, like the ABC with its ABC Classic FM or Triple J services, will carry an hourly newscast with news from that network. But be careful when dealing with tabloid journalism or commercial talkback radio, because you may be exposed to unnecessary exaggeration or distortion of facts.

As well, use the social-network platform’s or search engine’s reporting functionality to draw attention to fake news, propaganda or misinformation that is being shared or highlighted on that online service. In some cases like reporting inappropriate autocomplete predictions to Google, you may have to use the platform’s help options to hunt for the necessary resources.

Here, as we Australians face a run of general-election cycles that can be very tantalising for clandestine foreign interference, we have to be on our guard regarding fake news, propaganda and misinformation that could affect the polls.

Facebook clamps down on voter-suppression misinformation

Article


Facebook Extends Ban On Election Fakery To Include Lies About Voting Requirements | Gizmodo

From the horse’s mouth

Facebook

Expanding Our Policies on Voter Suppression (Press Release)

My Comments

Over recent years, misinformation and fake news have been used as tools to attack the electoral process in order to steer the vote towards candidates or political parties preferred by powerful interests. This was demonstrated through the UK Brexit referendum and the USA Presidential election in 2016, with out-of-character results emanating from both polls. It has made us more sensitive to the power of misinformation and its use in influencing an election cycle, with most of us looking towards established news outlets for our political news.

Another attack on the electoral process in a democracy is the use of misinformation or intimidation to discourage people from registering on the electoral rolls including updating their electoral-roll details or turning up to vote. This underhand tactic is typically to prevent certain communities from casting votes that would sway the vote away from an area-preferred candidate.

Even Australia, with its compulsory voting and universal suffrage laws, isn’t immune from this kind of activity, as demonstrated in the recent federal byelection for the Batman (now Cooper) electorate. Here, close to election day, a robocall campaign targeted older people in the north of the electorate who were likely to vote for the Australian Labor Party candidate rather than the area-preferred Greens candidate.

But this is a very common trick performed in the USA against minority, student or other voters to prevent them casting votes for liberal candidates. It manifests in accusations about non-citizens casting votes or the same people casting votes in multiple electorates.

Facebook have taken further action against voter-suppression misinformation by including it in their remit against fake news and misinformation. This action has been taken as part of Silicon Valley’s efforts to work against fake news during the US midterm Congressional elections.

At the moment, this effort applies to misinformation regarding exaggerated identification or procedural requirements concerning enrolment on the electoral rolls or casting your vote. It doesn’t yet apply to reports about conditions at the polling booths, like opening hours, overcrowding or violence. Nor does it address the distribution of other misinformation or propaganda intended to discourage enrolment and voting.

US-based Facebook end-users can use the reporting workflow to report voter-suppression posts to Facebook, through an “Incorrect Voting Info” option that you select when reporting posted content. This allows the information to be verified by fact-checkers engaged by Facebook, with false content “buried” in the News Feed and additional relevant context supplied with the article when people discover it.

This is alongside a constant Facebook effort to detect and remove fake accounts existing on the Facebook platform along with increased political-content transparency across its advertising platforms.

As I have always said, the issue regarding misleading information that influences the election cycle can’t just be handled by social-media and advertising platforms themselves. These platforms need to work alongside the government-run electoral-oversight authorities and similar organisations that work on an international level to exchange the necessary intelligence to effectively identify and take action against electoral fraud and corruption.

Google to keep deep records of political ads served on their platforms

Articles


Google Releases Political Ad Database and Trump Is the Big Winner | Gizmodo

From the horse’s mouth

Google

Introducing A New Transparency Report For Political Ads (Blog Post)

Transparency Report – Political Advertising On Google (Currently relevant to federal elections in the USA)

Advertising Policies Help Page – Political Advertising (Key details apply to USA Federal elections only)

My Comments

If you use YouTube as a free user or surf around the Internet to most ad-facilitated blogs and Websites like this one, you will find that the display ads are provided by an ad network owned or managed by Google. Similarly, some free ad-funded mobile apps may be showing ads facilitated through Google’s ad networks, and some advertisers pay to have links to their online resources placed at the top of the Google search-results list.

Online ad - to be respected like advertising in printed media

Google to keep records of political ads that appear on these sites so they have the same kind of respect as traditional print ads

Over the past few years, there has been a strong conversation regarding the authenticity of political advertising in the online space, thanks to the recent election-meddling and fake-news scandals. This concern arises because the online space easily transcends jurisdictional borders and isn’t as regulated as traditional broadcast, print and out-of-home advertising, especially when it comes to political advertising.

Then there is also the fact that relatively-open publishing platforms can be used to present content of propaganda value as editorial-grade content. The discovery of this content can be facilitated through search engines and the Social Web whereupon the content can even be shared further.

Recently Facebook have taken action to require authentication of the people and other entities behind ads hosted on their platforms, and behind Pages or Public Profiles with high follower counts. This is in conjunction with providing end-users access to archival information about ad campaigns run on that platform. It is part of increased efforts by them and Google to gain control of political ads appearing on their platforms.

But Google have taken things further by requiring authentication and proof of legitimate residency in the USA for entities publishing political ads through Google-managed ad platforms that target American voters on a federal level. As well, they are keeping archival information about the political ads, including the ads’ creatives, who sponsored each ad and how much was spent with Google on the campaign. They are even making available software “hooks” to this data so that researchers, concerned citizens, political watchdog groups and the like can draw it into their IT systems for further research.
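As a rough illustration of what researchers could do with such archival data once drawn into their own systems, here is a short Python sketch that aggregates reported spend per advertiser. The field names and figures are invented for illustration and do not reflect Google’s actual export schema:

```python
import csv
import io

# Hypothetical CSV export from a political-ad archive.
# Field names and figures are made up for this sketch.
sample_export = """advertiser,spend_usd,impressions,region
Example Campaign Committee,12000,450000,US-federal
Another PAC,3500,90000,US-federal
Example Campaign Committee,8000,210000,US-federal
"""

def total_spend_by_advertiser(csv_text: str) -> dict:
    """Sum the reported ad spend for each advertiser in the export."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["advertiser"]] = totals.get(row["advertiser"], 0) + int(row["spend_usd"])
    return totals

print(total_spend_by_advertiser(sample_export))
# {'Example Campaign Committee': 20000, 'Another PAC': 3500}
```

The same approach scales up to the bulk downloads or query interfaces that these transparency reports offer, letting watchdog groups track spending patterns over a whole campaign.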

If you view a political ad in the USA on this site or other sites that use display advertising facilitated by Google, you will find out who is behind that ad if you click or tap on the blue arrow at the top right hand corner of that ad. Then you will see the disclosure details under the “Why This Ad” heading. Those of you who use YouTube can bring up this same information if you click or tap on the “i” (information) or three-dot icon while the ad is playing.

Google are intending to roll these requirements out for state-level and local-level campaigns within the USA, as well as rolling out similar requirements in other countries and their sub-national jurisdictions. They also want to extend this vendor-based oversight to issues-based political advertising which, in a lot of cases, makes up the bulk of that kind of advertising.

Personally, I would also like to see Google and others who manage online ad platforms keep in the loop with election-oversight authorities like the USA’s Federal Election Commission or the Australian Electoral Commission. Here, the data can be used to identify inordinate political-donation and campaign-spending activity by political parties and others.

How can social media keep itself socially sane?

Broadcast

Facebook login page

Four Corners (ABC Australia) – Inside Facebook

iView – Click to view

Transcript

My Comments

I had just watched the Four Corners “Inside Facebook” episode on ABC TV Australia, which touched on the issues and impact that Facebook is having concerning content made available on that platform. It was in relation to recent questions concerning the Silicon Valley social-media and content-aggregation giants and their responsibility for content made available by their users.

I also saw the concepts raised in this episode come to the fore over the past few weeks with the InfoWars conspiracy-theory site saga that was boiling over in the USA. There, concern was being raised about the vitriol that the InfoWars site was posting up, especially in relation to recent school shootings in that country. At the time, podcast-content directories like Spotify and Apple iTunes were pulling podcasts generated by that site.

The telecast highlighted how the content moderation staff contracted by Facebook were handling questionable content like self-harm, bullying and hate speech.

For most of the time, Facebook took a content-moderation approach where only the bare minimum action was taken to deal with questionable content. This was because a heavy-handed approach to censoring content on the platform would cause end-users to drift away from it. But recent scandals and issues, like the Cambridge Analytica affair and the allegations regarding fake news, have put Facebook on edge regarding this topic.

Drawing attention to and handling questionable content

At the moment, Facebook are outsourcing most of the content-moderation work to outside agencies and have been very secretive about how this is done. The content-moderation workflow operates on a reactive basis, in response to other Facebook users using the “report” function in the user interface to draw attention to questionable content.

This is very different from managing a small blog or forum, which one person or a small group could do thanks to the small amount of traffic these Web presences attract. Facebook have to engage these content-moderation agencies to be able to work at the large scale they operate at.

The reporting of questionable content, especially abusive content, is hampered by the weak user experience offered for reporting this kind of content. This is more so where Facebook is used through an interface that offers less than the full Web-based user experience, such as some native mobile-platform apps.

As well, in most democratic countries, social media, unlike traditional broadcast media, is not subject to government oversight and regulation. Nor is it subject to oversight by “press councils” as happens with traditional print media.

Handling content

When a moderator is faced with content identified as having graphic violence, they have three options: ignore the content, leaving it as-is on the platform; delete the content, removing it from the platform; or mark it as disturbing, which restricts who can see the content and how it is presented, including a warning notice that the user must click before the content is shown. As well, they can notify the publisher who put up the content about the action that has been taken with it. In some cases, “marking as disturbing” may be a way to raise common awareness about the situation portrayed in the content.
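The three options described above could be sketched as a simple decision routine. This is purely illustrative; Facebook’s real moderation criteria and tooling are not public, so the function name and inputs here are assumptions:

```python
from enum import Enum, auto

class Action(Enum):
    IGNORE = auto()           # leave the content on the platform as-is
    DELETE = auto()           # remove the content from the platform
    MARK_DISTURBING = auto()  # restrict the audience and show a click-through warning

def moderate_graphic_violence(violates_policy: bool, raises_awareness: bool) -> Action:
    """Hypothetical decision logic for graphic-violence content."""
    if violates_policy:
        return Action.DELETE
    if raises_awareness:
        # Content kept, but behind a warning, to raise awareness of the situation.
        return Action.MARK_DISTURBING
    return Action.IGNORE
```

The point of the sketch is that “ignore” and “delete” are not the only outcomes: the intermediate “mark as disturbing” state is what lets newsworthy but graphic material stay visible behind a warning.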

They also touched on dealing with visual content depicting child abuse. One of the factors raised is that every additional view of content depicting abuse multiplies the abuse against the victim of that incident.

As well, child-abuse content isn’t readily reported to law-enforcement authorities unless it is streamed live using Facebook’s live-video streaming function. This is because the video clip could have been put up by someone at a prior time and on-shared by someone else, or it could be a link to content already hosted somewhere else online. But Facebook and their content-moderating agencies engage child-safety experts as part of the moderating team to determine whether content should be reported to law enforcement, and which jurisdiction should handle it.

When facing content that depicts suicide, self-harm or similar situations, the moderating agencies treat these as high-priority situations. If the content promotes this kind of self-destructive behaviour, it is deleted. Other material is flagged so as to show a “checkpoint” in the publisher’s Facebook user interface, where the user is invited to take advantage of mental-health resources local to them and particular to their situation.

Often the desperate Facebook user is posting this kind of content as a personal “cry for help”, which isn’t healthy. Typically it is a way to let their social circle, i.e. their family and friends, know of their personal distress.

Another issue that has been raised is the existence of underage accounts, where children under 13 operate a Facebook presence by lying about their age. But these accounts are only dealt with if a Facebook user draws attention to their existence.

An advertising–driven platform

What was highlighted in the Four Corners telecast was that Facebook, like the other Silicon Valley social-media giants, makes most of its money out of on-site advertising. The more engagement that end-users have with these social-media platforms, the more advertising appears on the pages, including new ads, which leads to more money for the social-media giant.

This is why some questionable content still exists on Facebook and similar platforms: it increases engagement with these platforms. This is despite the fact that most of us who use these platforms aren’t likely to actively seek out this kind of content.

But this show hadn’t even touched on the concept of “brand safety” being raised in the advertising industry. This is the issue of a brand’s image appearing next to controversial content that could damage the brand’s reputation, a concept highly treasured by most consumer-facing brands maintaining a “friendly to family and business” image.

A very challenging task

Moderating staff will also find themselves in very mentally-challenging situations while they do this job because in a lot of cases, this kind of disturbing content can effectively play itself over and over again in their minds.

The hate speech quandary

The most contentious issue that Facebook, like the rest of the Social Web, is facing is hate speech. But what qualifies as hate speech, and how obvious does it have to be before it has to be acted on? The broadcast initially drew attention to an Internet meme questioning “one’s (white) daughter falling in love with a black person”, which implies prejudice but doesn’t overtly underscore an act of hatred. The factors that may be used as qualifiers include the minority group involved, the role it plays in the accusation, the context of the message, and the kind of pejorative terms used.

Facebook also underscore their provision of a platform to host legitimate political debate. But they can delete resources if a successful criminal action is taken against the publisher.

Facebook has a “shielded” content policy for highly-popular political pages, something similarly afforded to respected newspapers and government organisations; such pages can be treated as if they were a “sacred cow”. If an issue is raised about their content, the complaint is taken to certain full-time content moderators employed directly by Facebook to determine what action should be taken.

A question that was raised in the context of hate speech was the successful criminal prosecution of alt-right activist Tommy Robinson for sub judice contempt of court in Leeds, UK. Here, he had used Facebook to make a live broadcast about a criminal trial in progress as part of his far-right agenda. But Twitter had taken down the offending content while Facebook didn’t act on the material. From further personal research on extant media coverage, he had committed a similar contempt-of-court offence in Canterbury, UK, thus underscoring a similar modus operandi.

A core comment raised about Facebook and the Social Web is that the more open the platform, the more likely one is to see inappropriate, unpleasant or socially-undesirable content on it.

But Facebook have been running a public-relations campaign regarding cleaning up their act in relation to the quality of content that exists on the platform. This is in response to the many inquiries they have been facing from governments regarding fake news, political interference, hate speech and other questionable content and practices.

Although Facebook is the most common social-media platform in use, the issues drawn out regarding the posting of inappropriate content also affect other social-media platforms and, to some extent, other open freely-accessible publishing platforms like YouTube. There is also the fact that these platforms can be used to link to content already hosted on other Websites, like those facilitated by cheap or free Web-hosting services.

There may be some depression, suicide and related issues that I have covered in this article that may concern you or someone else using Facebook. Here are some numbers for relevant organisations in your area who may help you or the other person with these issues.

Australia

Lifeline

Phone: 13 11 14
http://lifeline.org.au

Beyond Blue

Phone: 1300 22 46 36
http://beyondblue.org.au

New Zealand

Lifeline

Phone: 0800 543 354
http://lifeline.org.nz

Depression Helpline

Phone: 0800 111 757
https://depression.org.nz/

United Kingdom

Samaritans

Phone: 116 123
http://www.samaritans.org

SANELine

Phone: 0300 304 7000
http://www.sane.org.uk/support

Eire (Ireland)

Samaritans

Phone: 1850 60 90 90
http://www.samaritans.org

USA

Kristin Brooks Hope Center

Phone: 1-800-SUICIDE
http://imalive.org

National Suicide Prevention Lifeline

Phone: 1-800-273-TALK
http://www.suicidepreventionlifeline.org/

Google and Facebook are starting to bring accountability to political advertising

Articles


Facebook announces major changes to political ad policies | NBC News

Facebook reveals new political ad policies in wake of U.S. election | VentureBeat

What Can and Can’t You Do with Political Advertising on Facebook? | Spatially

Google Joins Facebook In Banning All Ads Related To Ireland’s Big Abortion Vote | Gizmodo

From the horse’s mouth

Facebook

Update on Our Advertising Transparency and Authenticity Efforts (Press Release)

Facebook will not be accepting referendum related ads from advertisers based outside of Ireland (Press Release)

Google

Supporting election integrity through greater advertising transparency (Blog Post)

My Comments

Over the last five months, a strong conversation has arisen surrounding electioneering and political advertising on online platforms, including social media and online advertising.

The trend concerning this activity is that political advertising spend is moving away from traditional print and broadcast media towards online media, as we make more use of highly-portable computing devices to consume our information and entertainment.

Issues that have also been raised include the use of fake comments and pre-programmed auto-responding “bots” as part of political campaigns. This is alongside the rise of very divisive political campaigns during the 2016 Brexit and US Presidential election cycles that played on racial and religious prejudices. There is also the fact that nation states with improper intentions are seeing the idea of poisoning the information flow as another weapon in their cyber-warfare arsenal.

It has also been facilitated through the use of highly-focused data-driven campaign-targeting techniques based on factors like race, gender, location and interests, a practice highlighted by the Cambridge Analytica saga that engulfed Facebook and Twitter.

As well, the online advertising and social media platforms have made it easy to create and maintain an advertising or editorial campaign that transcends jurisdictional borders. This is compared to traditional media that would be dependent on having the advertising material pass muster with the media outlet’s advertising staff in the outlet’s market before it hits the presses or the airwaves.

This issue will become more real with the use of addressable TV advertising, which is currently practised with some advertising-based video-on-demand services and some cable-TV platforms but will become the norm as traditional linear TV is delivered through the increasing use of interactive-TV platforms.

This technology would facilitate “hyper-targeting” of political campaigns such as municipal-level or postcode/ZIP-code targeting yet maintain the same “air of legitimacy” that the traditional TV experience provides, making it feasible to destabilise elections and civil discourse on the local-government level.

Election-oversight authorities in the various jurisdictions like the Australian Electoral Commission or the UK’s Electoral Commission have been doing battle with the online trend because most of the legislation and regulation surrounding political and election activities has been “set in stone” before the rise of the Internet. For example, in most jurisdictions, you will see or hear a disclosure tag after a political advertisement stating which organisation or individual was behind that ad. Or there will be financial reporting and auditing requirements for the election campaigns that take place before the polls.

Facebook and Google are having to face these realities through updated advertising-platform policies which govern political advertising. But at the time of writing, Facebook applies these policies to both candidate-based and issues-based campaigns while Google applies them to candidate-based campaigns only.

Firstly there is a prohibition on political advertising from entities foreign to the jurisdiction that the ad is targeted for. This is in line with legislation and regulation implemented by most jurisdictions proscribing foreign donations to political campaigns affecting that jurisdiction.

This is augmented through a requirement for political advertisers to furnish proof of identity and residence in the targeted jurisdiction. In the case of Facebook, they apply this policy to pages and profiles with very large followings as well as ads. They also implement a postcard-based proof-of-residence procedure where they send a postcard by snail mail to the user’s US-based home / business address to verify their presence in the USA.

Facebook augments this requirement by using artificial intelligence to flag whether an ad is political or not, so they can make sure that the advertiser is complying with the requirements for political advertising on this platform.
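As a rough illustration of the idea only — Facebook’s actual classifiers are trained machine-learning models and are not published, and the keyword list and threshold below are invented — an automated flag for potentially-political ad copy could work along these lines:

```python
# Illustrative sketch: a toy heuristic for flagging potentially political
# ad copy for human review. Real platforms use trained models; the keyword
# list and threshold here are invented for illustration.
POLITICAL_TERMS = {
    "vote", "election", "ballot", "candidate", "referendum",
    "campaign", "senator", "mp", "policy", "party",
}

def looks_political(ad_copy: str, threshold: int = 2) -> bool:
    """Flag ad copy if it contains at least `threshold` political terms."""
    words = {w.strip(".,!?").lower() for w in ad_copy.split()}
    return len(words & POLITICAL_TERMS) >= threshold

print(looks_political("Vote for a stronger party this election"))  # flagged
print(looks_political("Big spring sale on garden furniture"))      # not flagged
```

A flagged ad would then go to a human reviewer who checks whether the advertiser has completed the identity and residence verification described above.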

Like with traditional media, political ads on both these platforms will be required to have a disclosure tag. But Facebook goes further by making this a hyperlink that end-users can click on to see details like verification documents, why the viewer saw the ad along with a link to the sponsoring organisation’s Facebook Page. This has more utility than the slide shown at the end of a TV or online ad, the voice-announcement at the end of a radio ad or small text at the bottom of a print-media ad or billboard poster which most of these tags represent.

Both of the Internet titans will also make sure details about these campaigns are available and transparent to end-users so they know what is going on. For example, Facebook requires advertisers to maintain a Facebook Page before they buy advertising on any of the Facebook-owned platforms. This will have a “View Ads” tab which includes details about the targeting of each current and prior campaign, with an archive kept for four years.

Google has taken things further by making sure that political organisations, politicians, the media and journalists are aware of the resources they have to assure data security for their campaigns and other efforts. Here, they have prepared a “Protect Your Election” Webpage that highlights the resources that they provide that are relevant for each kind of player in a political campaign. This includes Project Shield to protect Websites against distributed denial-of-service attacks, along with enhanced security measures available to operators of Google Accounts associated with critical data.

Both companies have been implementing these procedures for North America, with Facebook trying them out in Canada then “cementing” them in to the USA before the midterm Congressional election cycle there. Both companies also took action to suspend political ads from entities outside Ireland during the election cycle for that country’s Eighth Amendment abortion referendum, applying the prohibition until the close of polls on May 25 2018. Let’s not forget that these standards will be gradually rolled out into other jurisdictions over time.

But what I would like to see is for companies who run online advertising and social-media activity to liaise strongly with election-oversight officials in the various jurisdictions especially if it affects a currently-running poll or one that is to take place in the near future. This is in order to advise these officials of any irregularities that are taking place with political advertising on their online platforms or for the officials to notify them about issues or threats that can manifest through the advertising process.

 

YouTube to share fact-checking resources when showing conspiracy videos

Articles

YouTube will use Wikipedia to fact-check internet hoaxes | FastCompany

YouTube plans ‘information cues’ to combat hoaxes | Engadget

YouTube will add Wikipedia snippets to conspiracy videos | CNet

My Comments

YouTube home page

YouTube – now also being used to distribute conspiracy theory material

Ever since the mid-1990s when the Internet became common, conspiracy theorists have used various Internet resources to push their weird and questionable “facts” upon the public. One of the methods being used of late is to “knock together” videos and run these as a “channel” on YouTube, the common video-sharing platform.

But Google, which owns YouTube, has announced at the SXSW “geek-fest” in Austin, Texas that it will be providing visual cues on the YouTube interface to highlight fact-checking resources like Wikipedia, everyone’s favourite “argument-settling” online encyclopaedia. These will appear when known conspiracy-theory videos or channels are playing and will most likely accompany the Web-based “regular-computer” experience or the mobile experience.

Wikipedia desktop home page

Wikipedia now being seen as a tool for providing clear facts

It is part of an effort that Silicon Valley is undertaking to combat the spread of fake news and misinformation, something that has become stronger over the last few years due to Facebook and co being used as a vector for spreading this kind of information. In fact, Google’s news-aggregation services like Google News (news.google.com) implement tagging of resources that come up regarding a news event, and they even call out “fact-check” resources as a way to help with debunking fake news.

Australian government to investigate the role of Silicon Valley in news and current affairs

Articles

Facebook login page

Facebook as a social-media-based news aggregator

Why the ACCC is investigating Facebook and Google’s impact on Australia’s news media | ABC News (Australia)

ACCC targets tech platforms | InnovationAus.com

World watching ACCC inquiry into dominant tech platforms | The Australian (subscription required)

Australia: News and digital platforms inquiry | Advanced Television

My Comments

A question that is being raised this year is the impact that the big technology companies in Silicon Valley, especially Google and Facebook, are having on the global media landscape. This is more so in relationship to established public, private and community media outlets along with the sustainability for these providers to create high-quality news and journalistic content especially in the public-affairs arena.

Google News - desktop Web view

Google News portal

It is being brought about due to the fact that most of us are consuming our news and public-affairs content on our computers, tablets and smartphones aided and abetted through the likes of Google News or Facebook. This can extend to things like use of a Web portal or “news-flash” functionality on a voice-driven assistant.

This week, the Australian Competition and Consumer Commission have commenced an inquiry into Google and Facebook in regards to their impact on Australian news media. Here, it is assessing whether there is real sustainable competition in the media and advertising sectors.

Google Home and similar voice-driven home assistants becoming another part of the media landscape

There is also the kind of effect Silicon Valley is having on media as far as consumers (end-users), advertisers, media providers and content creators are concerned. It also should extend to how this affects civil society and public discourse.

It has been brought about in response to the Nick Xenophon Team placing the inquiry as a condition of their support for the passage of Malcolm Turnbull’s media reforms through the Australian Federal Parliament.

A US-based government-relations expert saw this inquiry as offering a global benchmark regarding how to deal with the power that Silicon Valley has over media and public opinion with a desire for greater transparency between traditional media and the big tech companies.

Toni Bush, executive vice president and global head of government affairs, News Corporation (one of the major traditional-media powerhouses of the world) offered this quote:

“From the EU to India and beyond, concerns are rising about the power and reach of the dominant tech platforms, and they are finally being scrutinised like never before,”

What are the big issues being raised in this inquiry?

One of these is the way Google and Facebook are offering news and information services effectively as information aggregators. This is either in the form of providing search services, with Google ending up as a generic trademark for searching for information on the Internet, or social-media sharing in the case of Facebook. Alongside this is the provisioning of online advertising services and platforms for online media providers both large and small. This is in fact driven by data, which is being seen as the “new oil” of the economy.

A key issue often raised is how both these companies and, to some extent, other Silicon Valley powerhouses are changing the terms of engagement with content providers without prior warning. This is often in the form of a constantly-changing search algorithm or News Feed algorithm; or writing the logic behind various features like Google Accelerated Mobile Pages or Facebook Instant Articles to point the user experience to resources under their direct control rather than the resources under the control of the publisher or content provider. These issues relate to the end user having access to the publisher’s desktop or mobile user experience, which conveys that publisher’s branding or provides engagement and monetisation opportunities for the publisher such as subscriptions, advertising or online shopfronts.

This leads to online advertising, towards which a significant part of most businesses’ advertising budgets is now directed. What is being realised is that Google has a strong hand in most of the online search, display and video advertising, whether through operating commonly-used ad networks like AdSense, AdWords or the Google Display Network; or through providing ad-management technology and algorithms to ad networks, advertisers and publishers.

In this case, there are issues relating to ad visibility, end-user experience, brand safety, and effective control over content.

This extends to what is needed to allow a media operator to sustainably continue to provide quality content. It is irrespective of whether they are large or small or operating as a public, private or community effort.

Personally I would like to see it extend to small-time operators representing the blogosphere, including podcasters and “YouTubers”, being able to create content in a sustainable manner and able to “surface above the water”. This can also include whether traditional media could use material from these sources and attribute and remunerate their authors properly, such as a radio broadcaster syndicating a highly-relevant podcast or a newspaper or magazine engaging a blogger as a freelance columnist.

Other issues that need to be highlighted

I have covered on this site the kind of political influence that can be wielded through online media, advertising and similar services. It is more so where the use of these platforms in the political context is effectively unregulated territory and can happen across different jurisdictions.

One of these issues was use of online advertising platforms to run political advertising during elections or referendums. This can extend to campaign material being posted as editorial content on online resources at the behest of political parties and pressure groups.

Here, most jurisdictions want to maintain oversight of these activities in the context of overseeing political content that could adversely influence an election, and the municipal government in Seattle, Washington wants to regulate this issue for local elections. This can range from issues like attribution of comments and statements in advertising or editorial material, through the amount of time the candidates have to reach the electorate, to mandatory blackouts or “cooling-off” periods for political advertising before the jurisdiction actually goes to the polls.

Another issue is the politicisation of responses when politically-sensitive questions are being posed to a search engine or a voice-driven assistant of the Amazon Alexa, Apple Siri or Google Assistant kind. Here, the issue with these artificial-intelligence setups is that they could be set up to provide biased answers according to the political agenda of the company behind the search engine, voice-driven assistant or similar service.

Similarly, the issue of online search and social-media services being used to propagate “fake news” or propaganda disguised as news is something that will have to be raised by governments. It has become a key talking point over the past two years in relationship with the British Brexit referendum, the 2016 US Presidential election and other recent general elections in Europe. Here, the question that could be raised is whether Google and Facebook are effectively being “judge, jury and executioner” through their measures or whether traditional media is able to counter the effective influence of fake news.

Conclusion

What is coming to a head this year is the issue of how Silicon Valley and its Big Data efforts are able to skew the kind of news and information we get. It also includes whether the Silicon Valley companies need to be seen as influential media companies in their own right and what kind of regulation is needed in this scenario.

Politics creeps in to the world of the voice-driven assistant

Articles

Amazon Echo on kitchen bench press photo courtesy of Amazon USA

Is Amazon Alexa and similar voice-driven assistants becoming a new point of political influence in our lives?

Amazon’s Alexa under fire for voicing gender and racial views | The Times via The Australian

Alexa, are you a liberal? Users accuse Amazon’s smart assistant of having a political bias after she reveals she is a feminist who supports Black Lives Matter | Daily Mail

Amazon’s Alexa is a feminist and supports Black Lives Matter | Salon

My Comments

An issue that has started to come on board lately is how Amazon Alexa, Google Assistant, Apple Siri and Microsoft Cortana respond to highly-polarising political questions especially in context to hot-button topics.

This talking point has come up just lately in the USA which has over the last year become highly polarised. It has been driven by the rise of the alt-right who have been using social media to spread their vitriol, the fake news scandals, along with Donald Trump’s rise to the White House. Even people from other countries who meet up with Americans or have dealings with any organisation that has strong American bloodlines may experience this.

Could this even apply to Apple’s Siri assistant or Google Assistant that you have in your smartphone?

What had been discovered was that Amazon’s Alexa voice-driven assistant was being programmed to give progressive-tinted answers to issues seen to be controversial in the USA like feminism, Black Lives Matter, LGBTI rights, etc. This was causing various levels of angst amongst the alt-right who were worried about the Silicon-Valley / West-Coast influence on the social media and tech-based information resources.

But this has not played out with the UK’s hot-button topics, with Alexa taking a neutral stance on questions regarding Brexit, Jeremy Corbyn, Theresa May and similar issues. She was even challenged about what a “Corbynista” (someone who defends Jeremy Corbyn and his policies) is. This is due to not enough talent being available in the UK or Europe to program Alexa to answer UK hot topics in a manner that pleases Silicon Valley.

The key issue here is that voice-driven assistants can be and are being programmed to answer politically-testing questions in a hyper-polarised manner. How can this be done?

Could it also apply to Cortana on your Windows 10 computer?

The baseline approach, taken by Apple, Google and Microsoft, can be to give the assistant access to resources that match the software company’s or industry’s politics. This can mean pointing to a full-tier or meta-tier search engine that favourably ranks resources aligned to the desired beliefs. It can also be about pointing to non-search-engine resources like media sites that run news with that preferred slant.

The advanced approach would be for a company with enough programming staff and knowledge on board to programmatically control that assistant to give particular responses to particular questions. This could be to create responses worded in a way to effectively “preach” the desired agenda to the user. This method is in fact how Amazon is training Alexa to respond to those topics that are seen as hot-button issues in the USA.
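To illustrate the mechanism — this is not Amazon’s actual code, and the question keys and answers below are invented examples — a scripted-response layer over a neutral search fallback could be as simple as this:

```python
# Illustrative sketch: how a voice assistant could layer hand-authored
# responses over a general search fallback. The question and answer below
# are invented examples, not any vendor's actual implementation.
SCRIPTED_RESPONSES = {
    "are you a feminist": "Yes, I believe in equality.",
}

def answer(question: str) -> str:
    key = question.lower().rstrip("?").strip()
    if key in SCRIPTED_RESPONSES:
        return SCRIPTED_RESPONSES[key]   # hand-authored, editorialised reply
    return search_fallback(question)     # neutral search-engine result

def search_fallback(question: str) -> str:
    return f"Here is what I found on the Web for: {question}"

print(answer("Are you a feminist?"))
print(answer("What is Brexit?"))
```

The point of the sketch is that the scripted table takes priority over the neutral fallback, which is exactly where an editorial slant can be introduced without the user ever seeing the difference.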

Government regulators in various jurisdictions may start to raise questions regarding how Alexa and co are programmed and their influence on society. This is with a view to seeing search engines, social media, voice-driven assistants and the like as media companies similar to newspaper publishers or radio / TV broadcasters and other traditional media outlets, with a similar kind of regulatory oversight. It is more so where a voice-driven assistant is baked in to hardware like a smart speaker or software like an operating system to work as the only option available to users for this purpose, or one or more of these voice-driven assistants benefits from market dominance.

At the moment, there is nothing you can really do about this issue except to be aware of it and see it as something that can happen when a company or a cartel of companies who have clout in the consumer IT industry are given the power to influence society.

Being aware of fake news in the UK

Previous HomeNetworking01.info coverage on this topic

Silicon Valley Starts A War Against Fake News

Fact Checking Now Part Of The Online Media Aggregation Function

Useful UK-focused resources

FullFact.org (UK independent factchecking charity)

BBC Reality Check

Channel 4 News FactCheck

Political Parties

A few of the main political parties to watch in the UK

Conservatives (Tories)

Labour

Liberal Democrats

Green Party

UK Independence Party

Scottish National Party

Plaid Cymru (Party Of Wales)

Ulster Unionist Party

Sinn Fein

My Comments and advice

A key issue that is affecting how newsworthy events are covered and what people should become aware of in the news is the rise of propaganda, satire and similar information disguised as news. This situation is being described as “fake news”, “post-truth” and “alternative facts” and a significant number of academics have described it as a reason why Donald Trump became President of the USA or why the British citizens wanted the UK to leave the European Union.

I am giving some space in HomeNetworking01.info to the fake-news topic because an increasing number of people are obtaining their daily news from online sources using a smartphone, tablet or computer. This may be in addition to the traditional papers or the radio or TV newscasts and current-affairs shows or in lieu of these resources.

There have been many factors that have led to a fertile ground for fake news to spread. One of these is that most of us are using online search / aggregation services and social media as our news sources. Similarly, due to reduced circulation or ratings, various well-known news publishers and broadcasters are cutting back on their news budgets, which reduces the number of journalists in the newsroom or reduces news coverage to a quality not dissimilar to a news bulletin offered by a music-focused radio station.

Add to this the fact that it is relatively cheap and easy to set up a Website that looks very enticing thanks to low-cost “no-questions-asked” Web-host services and easy-to-use content management systems. It has led to the rise of Websites that carry propaganda or other material dressed up as news with this material being of questionable accuracy or value. Let’s not forget that it is easy to use Twitter or Facebook to share articles with our friends or followers especially if these articles support our beliefs.

Autocomplete list in Google Search Web user interface

Google users can report Autocomplete suggestions that they come across in their search-engine experience

It is also made worse by the cross-border nature of the Internet where one can set up a Website or social-media presence in one country to target citizens in another country with questionable messages. This makes it easier to run the propaganda but avoid being caught out by a broadcast-standards or election-oversight authority or the judicial system in the target jurisdiction.

The fact that the UK is going to the polls for a general election this year means that Britons will become more vulnerable to the fake-news phenomenon. This is a situation that is also affecting France and Germany, two of continental Europe’s major economic, political and population centres, which are either in the throes of, or have recently completed, a general election.

Reporting autocomplete suggestions in Google Search Web user experience

What you see when you report autocomplete suggestions in the Google Search Web user experience

The Chairman of the Culture, Media and Sport Committee, Damian Collins (Conservatives), has raised this issue concerning Facebook, urging them to filter out fake news. This is even though Silicon Valley has been taking steps to combat this problem through the following actions:

  • “turn off the money-supply tap” by refusing to partner their ad networks with fake-news sites or apps
  • engage with fact-checking organisations and departments that are either part of established newsrooms or universities to simplify the ability for their users to check the veracity of a claim
  • implementing a feedback loop to allow users to report auto-complete search suggestions, “snippets” answers, social-media posts and similar material shown in their sites, including the ability to report items as fake news
  • maintaining stronger user-account management and system security including eliminating accounts used just to deliver fake news and propaganda
  • modifying search-engine ranking algorithms or “trending-stories” listing algorithms to make it harder for fake news to surface.
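As a rough illustration of that last point — the actual ranking algorithms are proprietary, and the domains and trust scores below are invented — demoting low-trust sources can be as simple as multiplying a result’s relevance by a per-source trust score:

```python
# Illustrative sketch: demoting results from low-trust domains in a
# re-ranking pass. Domains and scores are invented for illustration;
# unknown domains get a middling default.
SOURCE_TRUST = {"trusted-news.example": 1.0, "fakery.example": 0.2}

def rerank(results):
    """results: list of (domain, relevance_score) tuples, best first."""
    return sorted(
        results,
        key=lambda r: r[1] * SOURCE_TRUST.get(r[0], 0.5),
        reverse=True,
    )

hits = [("fakery.example", 0.9), ("trusted-news.example", 0.6)]
print(rerank(hits))  # the trusted source now outranks the more "relevant" fake
```

Even though the fake-news page scored higher on raw relevance, the trust multiplier pushes the reputable source to the top.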

What can you do?

Look for information that qualifies the kind of story when you are viewing a collection of headlines on a search or news-aggregation site or app. For example, Google has implemented tagging, such as “satire”, in their Google News aggregation site and apps.

Trust your gut reaction to that claim that is being offered in that Facebook post before you share it. If you find that the story sounds like exaggeration or is “off the beam”, it sounds like fake news. As well, the copy in many fake-news articles is written in a way to whip up anger or other immediate sentiment.

Check the host Website or use a search engine to see if trusted sources, especially the ones you trust, are covering the story. As well, if your browser offers a plug-in or extension that highlights fake-news and questionable content, it may be worth adding this feature.

Following news from one or more trusted news sources (including their online presence) may be the way to go to verify news being pushed around on the Internet.

For example, switching on the radio or the telly for the news may be a good idea so as to be sure of what really is going on with this election. In the case of the radio, you may find that BBC Radio 4, BBC Local Radio or a talk-focused independent station like LBC may be the better resource for deeper coverage of the election. Music stations who are part of the same family as a news or talk station such as the BBC stations or Capital, Heart and Classic FM who are part of the same family as LBC can also be of value if you use their short news bulletins as a news source. This is because their news bulletins are fed by the newsroom that serves the talk station.

As well, visit the online sites offered by trusted publishers and broadcasters to check the news in relationship to what the parties are saying. It also includes heading to Websites operated by the various parties or candidates so you can get the facts and policies “from the horse’s mouth”.

You also must take advantage of the feedback loop that Facebook, Google and other online services offer to call out questionable content that appears during the election period. Typically this will be options to report the content or autocomplete hit as something like being inappropriate.

Fact-checking now part of the online media-aggregation function

Article – From the horse’s mouth

Google

Expanding Fact Checking at Google (Blog Post)

My Comments

ABC FactCheck – the ABC’s fact-checking resource that is part of their newsroom

Previously, we got our news through newspapers, magazines and radio / TV broadcasters who had invested significant amounts of money or time in journalism efforts. Now the Internet has reduced the friction associated with publishing content – you could set up an easily-viewable Website for very little time and cost and pump it with whatever written, pictorial, audio or video content you can.

Google News – one of the ways we are reading our news nowadays

This has allowed for an increase in the amount of news content that is of questionable accuracy and value to be easily made available to people. It is exacerbated by online services, such as search and aggregation services of the Google or Buzzfeed ilk and social media of the Facebook ilk, being a major “go-to” point, if not the “go-to” point, for our news-reading. In some cases, it is thanks to these services using “virtual newspaper” views and “trending-topic” lists to make it easy for one to see what has hit the news.

As well, with traditional media reducing their newsroom budgets which leads to reduction in the number of journalists in a newsroom, it gets to the point where content from online news-aggregation services ends up in the newspapers or traditional media’s online presence.

The fact that news of questionable accuracy or value is creeping in to our conversation space with some saying that it has affected elections and referenda is bringing forward new concepts like “post-truth”, “alternative facts” and “fake news” with these terms part of the lexicon. What is being done about it to allow people to be sure they are getting proper information?

Lately, a few news publishers and broadcasters have instigated “fact-checking” organisations or departments where they verify the authenticity of claims and facts that are coming in to their newsrooms. This has led to stories acquiring “Fact-check” or “Truth-meter” gauges along with copy regarding the veracity of these claims. In some cases, these are also appearing on dedicated Web pages that the news publisher runs.

In a lot of cases, such as Australia’s ABC, these “fact-checking” departments work in concert with another standalone organisation like a university, a government’s election-oversight department or a public-policy organisation. This partnership effectively “sharpens the fact-checking department’s knives” so they can do their job better.

But the question facing us is how we can be sure that the news item we are about to click on in Google or share on Facebook is kosher or not. Google have taken this further by integrating the results from fact-check organisations into articles listed on the Google News Website or the Google News & Weather iOS / Android mobile news apps, calling these “fact-check” results out with a tag. The same feature is also being used on the News search tab when you search for a particular topic. Initially this feature was rolled out in the US and UK markets but it is slowly being rolled out into other markets like France, Germany, Brazil and Mexico.

Google is also underpinning various fact-check efforts through helping publishers build up their efforts or instigating event-specific efforts like the CrossCheck effort involving 20 French newsrooms for the French presidential election. It is in addition to supporting the First Draft Coalition, which helps with assuring the integrity of the news being put up on the Internet. It also includes the use of the Digital News Initiative Fund to help newsrooms and others instigate or improve their fact-checking operations.

A question that will also be raised is how to identify the political bias of a particular media outlet and convey that in a search engine. This is something that has been undertaken by the Media Bias / Fact Check Website which is an independently-run source that assesses media coverage of US issues and how biased the media outlet is.

But a capability that needs to appear is the ability for fact-check organisations who implement those “accuracy gauges” to share these metrics as machine-usable metadata that can be interpreted through the rich search interfaces that Google and their ilk provide. Similarly, the provision of this metadata and its interpretation by other search engines or social-media sites can provide a similar advantage. But it would require the use of “news categorisation” metadata relating to news events, locations and the actors who are part of them to make this more feasible.
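Such machine-usable metadata already has a candidate format in schema.org’s ClaimReview markup, which is what Google’s fact-check tags consume. A fact-check publisher could embed something like the following JSON-LD in a fact-check page — the claim, organisation names, URL and rating below are invented placeholders:

```python
import json

# Minimal sketch of schema.org ClaimReview metadata, the format consumed
# by Google's fact-check tagging. All specifics (URL, claim, names,
# rating) are invented placeholders.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factcheck.org/checks/example-claim",
    "claimReviewed": "Example claim being checked",
    "itemReviewed": {
        "@type": "CreativeWork",
        "author": {"@type": "Organization", "name": "Example Outlet"},
    },
    "author": {"@type": "Organization", "name": "Example Fact Check"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,        # where the claim sits on the scale below
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}

# This would be embedded in the page inside a JSON-LD <script> block:
print(json.dumps(claim_review, indent=2))
```

With this structured in place, a search engine or social network can surface the publisher’s “accuracy gauge” next to the story without having to parse the fact-check article itself.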

Similarly, a social network like Facebook could use the fact-checking resources out there to identify where fake news is being spread so that users can be certain if that link they intend to share is questionable or not.

To the same extent, engaging government election-oversight departments like the Australian Electoral Commission, the Federal Election Commission in the USA and the Electoral Commission in the UK in the fact-checking fabric can help with assuring that there are proper and fair elections. This is more so as these departments perform a strong role in overseeing the campaigns that take place in the lead up to an election, and they could use the fact-checking organisations to identify where campaigns are being run with questionable information or in an improper manner.

As part of our research into a news topic, we could see these fact-checking resources playing an important role in sorting the facts from the spin and conspiracy nonsense.