Tag: Facebook

Big Tech works with the Linux Foundation to compete with Google Maps for geospatial information

Articles

OpenStreetMap seen as a viable alternative to Google Maps

Big Tech Companies Join Linux in Effort to Kill Google Maps (gizmodo.com)

There could finally be a solid Google Maps alternative on Android – SamMobile

From the horse’s mouth

Linux Foundation Project

Overture Maps Foundation – Linux Foundation Project

My Comments

Major tech firms like Microsoft, Meta (Facebook, Instagram), TomTom and Amazon Web Services are working with the Linux Foundation to build an open-source mapping and geolocation project to compete with Google Maps. It is intended to complement OpenStreetMap as a major competing pool of navigation and geospatial data.

As well, they are pulling in data from public sources like government urban-planning departments to create the “shape” of cities and towns. This allows new property developments that have been given the green light to be factored in, along with government-planned urban-renewal and similar projects. It could also encompass government roads departments that are laying down new roads or upgrading existing roads for new needs.

The idea is to support true interoperability when it comes to information about places and areas. Here, it is about using data from a plurality of data sources which leads to better data quality and richer data.

An issue that I see coming about is whether the Overture Maps Foundation project and OpenStreetMap will present this effort as a consumer-facing mobile app or desktop program pitched for general use like HERE WeGo Maps, or whether it will be focused towards various third-party Websites and software that exploit this data for e-government, vehicle-dispatch, hotel-booking or similar use cases.
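To give an idea of what third-party software exploiting this kind of open geospatial data can look like today, here is a minimal sketch that queries OpenStreetMap’s public Overpass API for cafés within a bounding box. The endpoint and query language are those of the existing Overpass service; the bounding-box coordinates are purely illustrative.

```python
# Minimal sketch: query OpenStreetMap's public Overpass API for cafes
# within a bounding box (south, west, north, east). Coordinates are
# illustrative; any small urban area will do.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

query = """
[out:json][timeout:25];
node["amenity"="cafe"](-37.82,144.95,-37.80,144.98);
out body;
"""

response = requests.post(OVERPASS_URL, data={"data": query}, timeout=30)
response.raise_for_status()

for element in response.json().get("elements", []):
    name = element.get("tags", {}).get("name", "(unnamed)")
    print(name, element["lat"], element["lon"])
```

A navigation app, booking site or council Web service could layer this kind of place data with the richer Overture datasets as they become available.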

But one area this could affect is your vehicle’s integrated GPS sat-nav feature, especially if a vehicle is intended to be positioned for the so-called “value-price” market. The combination of the Overture project and OpenStreetMap could be about providing a line-fit sat-nav setup at a price that is affordable to the manufacturer. It could also be about aftermarket automotive infotainment equipment with sat-nav functionality that can be sold at a price affordable to most people.

Similarly, there will be issues like assuring support for and access to real-time data such as weather, traffic and transit, or emergency-situation information. This could be facilitated through open database APIs offered by weather services and other organisations that maintain this kind of data, something that could be encouraged as public-service agencies adopt an “open data” attitude.
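As a pointer to how such open real-time data feeds already work, here is a minimal sketch against the free Open-Meteo weather API, which needs no API key. The parameters shown reflect that service’s documented usage at the time of writing and are used purely as an illustration of the kind of open data source a mapping platform could draw on; the coordinates are illustrative.

```python
# Minimal sketch: fetch current weather for a given point from the free
# Open-Meteo API (no API key required). Coordinates are illustrative.
import requests

params = {
    "latitude": -37.81,     # e.g. Melbourne CBD
    "longitude": 144.96,
    "current_weather": "true",
}

response = requests.get("https://api.open-meteo.com/v1/forecast",
                        params=params, timeout=30)
response.raise_for_status()

current = response.json().get("current_weather", {})
print("Temperature (°C):", current.get("temperature"))
print("Wind speed (km/h):", current.get("windspeed"))
```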

Facebook now offers a way to turn off political ads on its main platforms

Article

Facebook login page

Don’t want political ads in your Facebook or Instagram feed? You’ll be able to turn that off | CNet

From the horse’s mouth

Facebook

Launching The Largest Voting Information Effort in US History (Press Release)

Videos

Control Political Ad Content on Facebook (Click or tap to play)

Control Political Ad Content on Instagram (Click or tap to play)

My Comments

Facebook is introducing a feature that allows its users to effectively “mute” political advertising including issues-driven advertising on their main social-Web platform as well as Instagram.

This feature will be available to USA-based accounts as part of Facebook’s voter-information features for the 2020 US Presidential Election. That includes information on how and where to register, along with where and when to vote, including early-voting (pre-poll voting) and postal-voting information. It underscores Facebook’s role as part of Silicon Valley’s effort to “get out the vote” in the USA.

Personally, I am not sure whether this setup will provide information relevant to American expats who have moved to other countries, such as how their local US embassy or consulate is facilitating their vote, given that in most cases these expats still have voting rights of some sort for US elections.

The option will be available in the “Ad Preferences” section of your user-account settings on both Facebook and Instagram. Alternatively, both platforms will offer a contextual option for political ads, highlighted under a stylised “i”, that allows you to see fewer ads of this type. This can be set up using the Web-based user experience or the official native mobile-platform apps that you use to work these platforms.

Of course, there won’t be the ability to regulate editorial content from media organisations that is posted or shared through Facebook or Instagram. This will be an issue when you deal with media outlets that have a highly-partisan editorial policy. Nor will there be the ability to control posts, shares and comments from Pages and Profiles that aren’t shared as a paid advertisement.

There may also be questions about whether your favourite politician’s, political party’s or civic-society organisation’s Facebook or Instagram traffic will appear in your platform’s main view, especially if they pay to increase viewership of these posts. It can be of concern for those of us who have a strong role in political and civic society and see this Facebook traffic as a “news-ticker” for the political entities we engage with.

Facebook intend to roll this feature out to other countries where they have established systems for managing and monitoring political advertising on their platforms. At least they are the first online ad platform to allow users control over the political and issue advertising that they see while they use that platform.

Keeping the same character within your online community

Article

Facebook login page

Online communities do represent a lot of hard work and continuous effort including having many moderators

General Election 2019: Has your local Facebook group been hijacked by politics? | BBC News

My Comments

The recent UK general election highlighted an issue with the management of online communities, especially those targeted at neighbourhoods.

In the BBC News article, a local Facebook group that a neighbourhood used specifically for sharing advice, recommending businesses, advertising local events, “lost-and-found” and similar purposes was steered away from this purpose and turned into a political discussion board.

You may or may not think that politics should have something to do with your neighbourhood, but ordinarily it stays well clear of such groups. That is unless you are dealing with a locally-focused issue like the availability of publicly-funded services such as healthcare, education or transport infrastructure in your neighbourhood, or a property development before the local council that could affect your neighbourhood.

How that came about was that the group had been managed by a single older person who passed away. Due to the loss of its administrator, the group effectively became a headless “zombie” group with no oversight over what was being posted.

That happened as the UK general election was around the corner and the politics was “heating up”, especially as the affected neighbourhood was in a marginal electorate. Here, the neighbourhood group “lost it” when it came to political content, with the acrimony intensifying after the close of polls. The site administrator’s widow even stated that the online group was being hijacked by others pushing their own agendas.

Subsequently, several members of that neighbourhood online forum stepped in to effectively wrest control and restore sanity to it. This included laying down rules against online bullying and hate speech along with encouraging basic courtesy on the bulletin board. It was hard to steer the forum back to that sense of normalcy due to pushback from some members of the group and the activity that had become established during the power vacuum.

This kind of behaviour, like all other misbehaviour facilitated through the Social Web and other Internet platforms, exploits the perceived distance that the Internet offers. It is something you wouldn’t do to someone face-to-face.

What was identified was a loss of effective management power for that online group due to the absence of a leader who maintained the group’s character, with no-one effectively stepping up to fill the void. This can easily happen with any form of online forum or bulletin board, including an impromptu “group chat” set up on a platform like WhatsApp, Facebook Messenger or Viber.

It is like a real-life situation with an organisation like a family business where people have put in the hard yards to maintain a particular character. Then they lose effective control of that organisation and no-one steps up to the plate to maintain that same character. This kind of situation can occur if there isn’t continual thought about succession planning in that organisation’s management, especially if there aren’t any young people in the organisation who are loyal to its character and vision.

An online forum should have the ability and be encouraged to have multiple moderators with the same vision so others can “take over” if one isn’t able to adequately continue the job anymore. Here, you can discover and encourage potential moderators through their active participation online and in any offline events. But you would need to have some people who have some sort of computer and Internet literacy as moderators so they know their way around the system or require very minimal training.

The multiplicity of moderators can cater for unforeseen situations like death or sudden resignation. It also assures that one of the moderators can travel without needing to keep their “finger on the pulse” of that online community. In the same vein, if they or one of their loved ones falls ill or there is a personal calamity, they can concentrate on their own or their loved one’s recovery and rehabilitation, or on managing that situation.

In reality, a person who moves out of a neighbourhood on good terms will often maintain regular contact with their former neighbours and try to keep their “finger on the pulse” regarding the neighbourhood’s character. This fact can be exploited when managing a neighbourhood-focused online community by keeping such people on as “standby moderators” who can be “roped in” to moderate the online community if there are too few moderators.

To keep the same kind of “vibe” within that online community that you manage will require many hands at the pump. It is not just a one-person affair.

Australian media raises the issue of fake celebrity and brand endorsements

Article

Event page for spammy Facebook event

Facebook is one of many online platforms being used for fake celebrity and brand endorsements

Networks warn of fake ads, scams. | TV Tonight

Media Watch broadcast on this topic | ABC

My Comments

An issue that was called out at the end of April this year is the improper use of celebrity and brand endorsements by online snake-oil salesmen.

ABC’s Media Watch and TV Tonight talked of this situation appearing on Facebook and other online advertising platforms. Typically the people and entities being affected were household names associated with the “screen of respect” in the household, i.e. the TV screen in the lounge room. It ranged from the free-to-air broadcasters themselves, including the ABC which adheres strictly to principles established by the BBC about endorsement of commercial goods and services, through TV shows like “The Project” or “Sunrise”, to key TV personalities like Eddie McGuire and Jessica Rowe.

Lifehacker Website

…as are online advertising platforms

Typically the ads containing the fake endorsements would appear as part of Facebook’s News Feed or in Google’s advertising networks, especially the search-driven AdWords network. I also see this as being a risk with other online ad networks that operate on a self-serve basis and offer low-risk high-return advertising packages such as “cost-per-click-only” deals, and I had called this out in an earlier article about malvertisement activity.

There has been recent investigation activity by the Australian Competition and Consumer Commission concerning the behaviour of the Silicon Valley online-media giants and their impact on traditional media around the world. It will also include issues relating to Google and its control over online search and display advertising.

Facebook have been engaging in efforts to combat spam, inauthentic account behaviour and similar activity across its social-network brands. But they have found that it is a “whack-a-mole” effort where other similar sites or the same site pops up even if they shut it down successfully. I would suspect that a lot of these situations are based around pages or ads linking to a Website hosted somewhere on the Internet.

A question that was raised regarding this kind of behaviour is whether Facebook, Google and others should be making money out of these scam ads that come across their online platforms. This question would extend to the “estate agents” and “landlords” of cyberspace i.e. the domain-name brokers and the Webhosts who offer domain names or Webhosting space to people to use for their online presence.

There is also the idea of maintaining a respectable, brand-safe, family-and-workplace-friendly media experience in the online world, which would be very difficult. This issue affects both the advertisers who want to work in a respectable brand-safe environment and online publishers who don’t want their publications to convey a downmarket image, especially if they invest time and money in creating quality content.

As we see more ad-funded online content appear, there will be the call by brands, publishers and users to gain control over the advertising ecosystem to keep scam advertising along with malvertisements at bay along with working against ad fraud. It will also include verifying the legitimacy of any endorsements that are associated with a brand or personality.

A good practice for advertisers and publishers in the online space would be to keep tabs on the online advertising behaviour that is taking place. For example, an advertiser can report questionable impressions of their advertising campaigns, including improper endorsement activity, while a publisher can report ads for fly-by-night activity that appear in their advertising space to the ad networks they use. Or users could report questionable ads on the Social Web to the various social network platforms where they see them appear.

Facebook clamps down on voter-suppression misinformation

Article

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Are you sure you are casting your vote or able to cast your vote without undue influence?

Facebook Extends Ban On Election Fakery To Include Lies About Voting Requirements | Gizmodo

From the horse’s mouth

Facebook

Expanding Our Policies on Voter Suppression (Press Release)

My Comments

Over recent years, misinformation and fake news have been used as tools to attack the electoral process in order to steer the vote towards candidates or political parties preferred by powerful interests. This has been demonstrated through the UK Brexit referendum and the USA Presidential Election in 2016, with out-of-character results emanating from those polls. It has therefore made us more sensitive to the power of misinformation and its use in influencing an election cycle, with most of us looking towards established news outlets for our political news.

Another attack on the electoral process in a democracy is the use of misinformation or intimidation to discourage people from registering on the electoral rolls including updating their electoral-roll details or turning up to vote. This underhand tactic is typically to prevent certain communities from casting votes that would sway the vote away from an area-preferred candidate.

Even Australia, with its compulsory voting and universal suffrage laws, isn’t immune from this kind of activity, as demonstrated in the recent federal by-election for the Batman (now Cooper) electorate. Here, close to election day, there was a robocall campaign targeted at older people in the north of the electorate who were likely to vote for the Australian Labor Party candidate rather than the area-preferred Greens candidate.

But this is a very common trick performed in the USA against minority, student or other voters to prevent them casting votes towards liberal candidates. This manifests in accusations about non-citizens casting votes or the same people casting votes in multiple electorates.

Facebook have taken further action against voter-suppression misinformation by including it in their remit against fake news and misinformation. This action has been taken as part of Silicon Valley’s efforts to work against fake news during the US midterm Congressional elections.

At the moment, this effort applies to information regarding exaggerated identification or procedural requirements concerning enrolment on the electoral rolls or casting your vote. It doesn’t yet apply to reports about conditions at the polling booths like opening hours, overcrowding or violence. Nor does this effort approach the distribution of other misinformation or propaganda to discourage enrolment and voting.

US-based Facebook end-users can use the reporting workflow to report voter-suppression posts to Facebook. This is through the use of an “Incorrect Voting Info” option that you select when reporting posted content to Facebook. Here, it will allow this kind of information to be verified by fact-checkers that are engaged by Facebook, with false content “buried” in the News Feed along with additional relevant content being supplied with the article when people discover it.

This is alongside a constant Facebook effort to detect and remove fake accounts existing on the Facebook platform along with increased political-content transparency across its advertising platforms.

As I have always said, the issue regarding misleading information that influences the election cycle can’t just be handled by social-media and advertising platforms themselves. These platforms need to work alongside the government-run electoral-oversight authorities and similar organisations that work on an international level to exchange the necessary intelligence to effectively identify and take action against electoral fraud and corruption.

How can social media keep itself socially sane?

Broadcast

Facebook login page

Four Corners (ABC Australia) – Inside Facebook

iView – Click to view

Transcript

My Comments

I had just watched the Four Corners “Inside Facebook” episode on ABC TV Australia which touched on the issues and impact that Facebook was having concerning content that is made available on that platform. It was in relationship to recent questions concerning the Silicon Valley social-media and content-aggregation giants and what is their responsibility regarding content made available by their users.

I also saw the concepts that were raised in this episode coming to the fore over the past few weeks with the InfoWars conspiracy-theory site saga that was boiling over in the USA. There, concern was being raised about the vitriol that the InfoWars site was posting up, especially in relation to recent school shootings in that country. At the time, podcast-content directories like Spotify and Apple iTunes were pulling podcasts generated by that site, while Facebook was facing pressure over how to handle that site’s presence on its own platform.

The telecast highlighted how the content moderation staff contracted by Facebook were handling questionable content like self-harm, bullying and hate speech.

For most of the time, Facebook took a content-moderation approach where only the bare minimum action was taken to deal with questionable content. This was because if they took a heavy-handed approach to censoring content that appeared on the platform, end-users would drift away from it. But recent scandals and issues like the Cambridge Analytica affair and the allegations regarding fake news have put Facebook on edge regarding this topic.

Drawing attention to and handling questionable content

At the moment, Facebook are outsourcing most of the content-moderation work to outside agencies and have been very secretive about how this is done. But the content-moderation workflow is achieved on a reactive basis in response to other Facebook users using the “report” function in the user-interface to draw their attention to questionable content.

This is very different to managing a small blog or forum which is something one person or a small number of people could do thanks to the small amount of traffic that these small Web presences could manage. Here, Facebook is having to engage these content-moderation agencies to be able to work at the large scale that they are working at.

The ability to report questionable content, especially abusive content, is hampered by the weak user experience offered for reporting this kind of content. It is more so where Facebook is used through a user interface that is less than the full Web-based experience, such as some native mobile-platform apps.

This is compounded by the fact that, in most democratic countries, social media, unlike traditional broadcast media, is not subject to government oversight and regulation. Nor is it subject to oversight by “press councils” as would happen with traditional print media.

Handling content

When a moderator is faced with content identified as having graphic violence, they have three options: ignore it, leaving it as-is on the platform; delete it, removing it from the platform; or mark it as disturbing, which restricts who can see the content and how it is presented, including a warning notice the user must click before the content is shown. As well, they can notify the publisher who put up the content about the action that has been taken with it. In some cases, “marking as disturbing” may be used as a method to raise common awareness about the situation being portrayed in the content.

They also touched on dealing with visual content depicting child abuse. One of the factors raised is that every additional view of content depicting abuse multiplies the harm done to the victim of that incident.

As well, child-abuse content isn’t readily reported to law-enforcement authorities unless it is streamed live using Facebook’s live-video streaming function. This is because the video clip could have been put up by someone at a prior time and on-shared by someone else, or it could be a link to content already hosted somewhere else online. But Facebook and their content-moderating agencies engage child-safety experts as part of their moderating team to determine whether it should be reported to law enforcement (and which jurisdiction should handle it).

When facing content that depicts suicide, self-harm or similar situations, the moderating agencies treat these as high-priority situations. Here, if the content promotes this kind of self-destructive behaviour, it is deleted. Other material is flagged so as to show a “checkpoint” on the publisher’s Facebook user interface, where the user is invited to take advantage of mental-health resources that are local to them and particular to their situation.

But it is a situation where the desperate Facebook user is posting this kind of content as a personal “cry for help” which isn’t healthy. Typically it is a way to let their social circle i.e. their family and friends know of their personal distress.

Another issue that has also been raised is the existence of underage accounts, where children under 13 are operating a Facebook presence by lying about their age. But these accounts are only dealt with if a Facebook user draws attention to the existence of that account.

An advertising-driven platform

What was highlighted in the Four Corners telecast was that Facebook, like the other Silicon Valley social-media giants, make most of their money out of on-site advertising. Here, the more engagement that end-users have with these social-media platforms, the more advertising appears on their pages, which leads to more money made by the social media giant.

This is why some of the questionable content still exists on Facebook and similar platforms: it increases engagement with these platforms, even though most of us who use them aren’t likely to actively seek this kind of content.

But this show hadn’t even touched on the concept of “brand safety” which is being raised in the advertising industry. This is the issue of where a brand’s image is likely to appear next to controversial content which could be seen as damaging to the brand’s reputation, and is a concept highly treasured by most consumer-facing brands maintaining the “friendly to family and business” image.

A very challenging task

Moderating staff will also find themselves in very mentally-challenging situations while they do this job because in a lot of cases, this kind of disturbing content can effectively play itself over and over again in their minds.

The hate speech quandary

The most contentious issue that Facebook, like the rest of the Social Web, is facing is hate speech. But what qualifies as hate speech, and how obvious does it have to be before it has to be acted on? The broadcast initially drew attention to an Internet meme playing on the idea of one’s (white) daughter falling in love with a black person, which was not treated as underscoring an act of hatred. The factors that may be used as qualifiers may be the minority group concerned, the role they are given in the accusation, the context of the message, along with the kind of pejorative terms used.

They are also underscoring the provision of a platform to host legitimate political debate. But Facebook can delete resources if a successful criminal action was taken against the publisher.

Facebook has a “shielded” content policy for highly-popular political pages, which is something similarly afforded to respected newspapers and government organisations; and such pages could be treated as if they are a “sacred cow”. Here, if there is an issue raised about the content, the complaint is taken to certain full-time content moderators employed directly by Facebook to determine what action should be taken.

A question that was raised in the context of hate speech was the successful criminal prosecution of alt-right activist Tommy Robinson for sub judice contempt of court in Leeds, UK. Here, he had used Facebook to make a live broadcast about a criminal trial in progress as part of his far-right agenda. But Twitter had taken down the offending content while Facebook didn’t act on the material. From further personal research on extant media coverage, he had committed a similar contempt-of-court offence in Canterbury, UK, thus underscoring a similar modus operandi.

A core comment that was raised about Facebook and the Social Web is that the more open the platform, the more likely one is to see inappropriate unpleasant socially-undesirable content on that platform.

But Facebook have been running a public-relations campaign regarding cleaning up its act in relation to the quality of content that exists on the platform. This is in response to the many inquiries it has been facing from governments regarding fake news, political interference, hate speech and other questionable content and practices.

Although Facebook is the common social-media platform in use, the issues drawn out regarding the posting of inappropriate content also affect other social-media platforms and, to some extent, other open freely-accessible publishing platforms like YouTube. There is also the fact that these platforms can be used to link to content already hosted on other Websites, like those facilitated by cheap or free Web-hosting services.

This article has covered depression, suicide and related issues that may concern you or someone else using Facebook. Here are some numbers for relevant organisations in your area who may be able to help you or the other person with these issues.

Australia

Lifeline

Phone: 13 11 14
http://lifeline.org.au

Beyond Blue

Phone: 1300 22 46 36
http://beyondblue.org.au

New Zealand

Lifeline

Phone: 0800 543 354
http://lifeline.org.nz

Depression Helpline

Phone: 0800 111 757
https://depression.org.nz/

United Kingdom

Samaritans

Phone: 116 123
http://www.samaritans.org

SANELine

Phone: 0300 304 7000
http://www.sane.org.uk/support

Eire (Ireland)

Samaritans

Phone: 1850 60 90 90
http://www.samaritans.org

USA

Kristin Brooks Hope Center

Phone: 1-800-SUICIDE
http://imalive.org

National Suicide Prevention Lifeline

Phone: 1-800-273-TALK
http://www.suicidepreventionlifeline.org/

Google and Facebook are starting to bring accountability to political advertising

Articles

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Are you sure you are casting your vote without undue influence? (Courtesy of Australian Electoral Commission)

Facebook announces major changes to political ad policies | NBC News

Facebook reveals new political ad policies in wake of U.S. election | VentureBeat

What Can and Can’t You Do with Political Advertising on Facebook? | Spatially

Google Joins Facebook In Banning All Ads Related To Ireland’s Big Abortion Vote | Gizmodo

From the horse’s mouth

Facebook

Update on Our Advertising Transparency and Authenticity Efforts (Press Release)

Facebook will not be accepting referendum related ads from advertisers based outside of Ireland (Press Release)

Google

Supporting election integrity through greater advertising transparency (Blog Post)

My Comments

Over the last five months, a strong conversation has arisen surrounding electioneering and political advertising on online platforms, including social media and online advertising.

The trend concerning this activity is that political advertising spend is moving away from traditional print and broadcast media towards online media as we make more use of highly-portable computing devices to consume our information and entertainment.

Issues that have also been raised include the use of fake comments and pre-programmed auto-responding “bots” as part of political campaigns. This is alongside the rise of very divisive political campaigns during the 2016 Brexit and US Presidential election cycles that played on racial and religious prejudices. There is also the fact that nation states with improper intentions are seeing the idea of poisoning the information flow as another weapon in their cyber-warfare arsenal.

It has also been facilitated through the use of highly-focused data-driven campaign-targeting techniques based on factors like race, gender, location and interests, with this practice being highlighted in the Cambridge Analytica saga that caught up Facebook and Twitter.

As well, the online advertising and social media platforms have made it easy to create and maintain an advertising or editorial campaign that transcends jurisdictional borders. This is compared to traditional media that would be dependent on having the advertising material pass muster with the media outlet’s advertising staff in the outlet’s market before it hits the presses or the airwaves.

This issue will become more real with the use of addressable TV advertising, which is currently practised with some advertising-based video-on-demand services and some cable-TV platforms but will become the norm as traditional linear TV is delivered through the increasing use of interactive-TV platforms.

This technology would facilitate “hyper-targeting” of political campaigns such as municipal-level or postcode/ZIP-code targeting yet maintain the same “air of legitimacy” that the traditional TV experience provides, making it feasible to destabilise elections and civil discourse on the local-government level.

Election-oversight authorities in the various jurisdictions like the Australian Electoral Commission or the UK’s Electoral Commission have been doing battle with the online trend because most of the legislation and regulation surrounding political and election activities has been “set in stone” before the rise of the Internet. For example, in most jurisdictions, you will see or hear a disclosure tag after a political advertisement stating which organisation or individual was behind that ad. Or there will be financial reporting and auditing requirements for the election campaigns that take place before the polls.

Facebook and Google are having to face these realities through updated advertising-platform policies which govern political advertising. But Facebook applies this to candidate-based campaigns and issues-based campaigns, while Google applies this to candidate-based campaigns only at the time of writing.

Firstly there is a prohibition on political advertising from entities foreign to the jurisdiction that the ad is targeted for. This is in line with legislation and regulation implemented by most jurisdictions proscribing foreign donations to political campaigns affecting that jurisdiction.

This is augmented through a requirement for political advertisers to furnish proof of identity and residence in the targeted jurisdiction. In the case of Facebook, they apply this policy to pages and profiles with very large followings as well as ads. Similarly, they implement a postcard-based proof-of-residence procedure where they send a postcard by snail mail to the user’s US-based home or business address to verify their presence in the USA.

Facebook augments this requirement by using artificial-intelligence to flag if an ad is political or not, so they can make sure that the advertiser is complying with the requirements for political advertising on this platform.

Like with traditional media, political ads on both these platforms will be required to have a disclosure tag. But Facebook goes further by making this a hyperlink that end-users can click on to see details like verification documents, why the viewer saw the ad along with a link to the sponsoring organisation’s Facebook Page. This has more utility than the slide shown at the end of a TV or online ad, the voice-announcement at the end of a radio ad or small text at the bottom of a print-media ad or billboard poster which most of these tags represent.

Both of the Internet titans will also make sure details about these campaigns are available and transparent to end-users so they know what is going on. For example, Facebook requires advertisers to maintain a Facebook Page before they buy advertising on any of the Facebook-owned platforms. This will have a “View Ads” tab which includes details about targeting of each current and prior campaign with a four-year archive allowance.

Google has taken things further by making sure that political organisations, politicians, the media and journalists are aware of the resources they have to assure data security for their campaigns and other efforts. Here, they have prepared a “Protect Your Election” Webpage that highlights the resources that they provide that are relevant for each kind of player in a political campaign. This includes Project Shield to protect Websites against distributed denial-of-service attacks, along with enhanced security measures available to operators of Google Accounts associated with critical data.

Both companies have been implementing these procedures for North America, with Facebook trying them out in Canada then “cementing” them in for the USA before the midterm Congress election cycle there. Both companies then took action to suspend political ads from foreign entities outside Ireland during the election cycle for the Eighth Amendment abortion referendum taking place in that country. Here, they applied the prohibition until the close of polls on May 25 2018. Let’s not forget that these standards will be gradually rolled out into other jurisdictions over time.

But what I would like to see is for companies who run online advertising and social-media activity to liaise strongly with election-oversight officials in the various jurisdictions especially if it affects a currently-running poll or one that is to take place in the near future. This is in order to advise these officials of any irregularities that are taking place with political advertising on their online platforms or for the officials to notify them about issues or threats that can manifest through the advertising process.

 

Australian government to investigate the role of Silicon Valley in news and current affairs

Articles

Facebook login page

Facebook as a social-media-based news aggregator

Why the ACCC is investigating Facebook and Google’s impact on Australia’s news media | ABC News (Australia)

ACCC targets tech platforms | InnovationAus.com

World watching ACCC inquiry into dominant tech platforms | The Australian (subscription required)

Australia: News and digital platforms inquiry | Advanced Television

My Comments

A question that is being raised this year is the impact that the big technology companies in Silicon Valley, especially Google and Facebook, are having on the global media landscape. This is more so in relation to established public, private and community media outlets, along with the ability of these providers to sustainably create high-quality news and journalistic content, especially in the public-affairs arena.

Google News - desktop Web view

Google News portal

It is being brought about due to the fact that most of us are consuming our news and public-affairs content on our computers, tablets and smartphones aided and abetted through the likes of Google News or Facebook. This can extend to things like use of a Web portal or “news-flash” functionality on a voice-driven assistant.

This week, the Australian Competition and Consumer Commission have commenced an inquiry into Google and Facebook in regards to their impact on Australian news media. Here, it is assessing whether there is real sustainable competition in the media and advertising sectors.

Google Home and similar voice-driven home assistants becoming another part of the media landscape

There is also the kind of effect Silicon Valley is having on media as far as consumers (end-users), advertisers, media providers and content creators are concerned. It also should extend to how this affects civil society and public discourse.

It has been brought about in response to the Nick Xenophon Team placing the inquiry as a condition of their support for the passage of Malcolm Turnbull’s media reforms through the Australian Federal Parliament.

A US-based government-relations expert saw this inquiry as offering a global benchmark regarding how to deal with the power that Silicon Valley has over media and public opinion with a desire for greater transparency between traditional media and the big tech companies.

Toni Bush, executive vice president and global head of government affairs, News Corporation (one of the major traditional-media powerhouses of the world) offered this quote:

“From the EU to India and beyond, concerns are rising about the power and reach of the dominant tech platforms, and they are finally being scrutinised like never before,”

What are the big issues being raised in this inquiry?

One of these is the way Google and Facebook are offering news and information services effectively as information aggregators. This is either in the form of providing search services, with Google ending up as a generic trademark for searching for information on the Internet; or social-media sharing in the case of Facebook. Alongside this is the provisioning of online advertising services and platforms for online media providers both large and small. This is in fact driven by data, which is being seen as the “new oil” of the economy.

A key issue often raised is how both these companies and, to some extent, other Silicon Valley powerhouses are changing the terms of engagement with content providers without prior warning. This is often in the form of a constantly-changing search algorithm or News Feed algorithm; or writing the logic behind features like Google Accelerated Mobile Pages or Facebook Instant Articles to point the user experience to resources under their direct control rather than resources under the control of the publisher or content provider. These issues affect whether the end user has access to the publisher’s desktop or mobile user experience, which conveys that publisher’s branding and provides engagement and monetisation opportunities for the publisher such as subscriptions, advertising or online shopfronts.

This leads to online advertising, which is very much the destination of a significant part of most businesses’ advertising budgets. What is being realised is that Google has a strong hand in most online search, display and video advertising, whether through operating commonly-used ad networks like AdSense, AdWords or the Google Display Network; or through providing ad-management technology and algorithms to ad networks, advertisers and publishers.

In this case, there are issues relating to ad visibility, end-user experience, brand safety, and effective control over content.

This extends to what is needed to allow a media operator to sustainably continue to provide quality content. It is irrespective of whether they are large or small or operating as a public, private or community effort.

Personally I would like to see it extend to small-time operators, such as the blogosphere including podcasters and “YouTubers”, being able to create content in a sustainable manner and able to “surface above the water”. This can also include whether traditional media could use material from these sources and attribute and remunerate their authors properly, such as a radio broadcaster syndicating a highly-relevant podcast or a newspaper or magazine engaging a blogger as a freelance columnist.

Other issues that need to be highlighted

I have covered on this site the kind of political influence that can be wielded through online media, advertising and similar services. It is more so where the use of these platforms in the political context is effectively unregulated territory and can happen across different jurisdictions.

One of these issues was use of online advertising platforms to run political advertising during elections or referendums. This can extend to campaign material being posted as editorial content on online resources at the behest of political parties and pressure groups.

Here, most jurisdictions want to maintain oversight of this activity in the context of overseeing political content that could adversely influence an election, and the municipal government in Seattle, Washington wants to regulate this issue for local elections. This can range from issues like attribution of comments and statements in advertising or editorial material, through the amount of time the candidates have to reach the electorate, to mandatory blackouts or “cooling-off” periods for political advertising before the jurisdiction actually goes to the polls.

Another issue is the politicisation of responses when politically-sensitive questions are posed to a search engine or a voice-driven assistant of the Amazon Alexa, Apple Siri or Google Assistant kind. Here, the issue with these artificial-intelligence setups is that they could be set up to provide biased answers according to the political agenda of the company behind the search engine, voice-driven assistant or similar service.

Similarly, the issue of online search and social-media services being used to propagate “fake news” or propaganda disguised as news is something that will have to be raised by governments. It has become a key talking point over the past two years in relation to the British Brexit referendum, the 2016 US Presidential election and other recent general elections in Europe. Here, the question that could be raised is whether Google and Facebook are effectively being “judge, jury and executioner” through their measures, or whether traditional media is able to counter the effective influence of fake news.

Conclusion

What is happening this year is that the issue of how Silicon Valley and its Big Data efforts are able to skew the kind of news and information we get is coming under formal scrutiny. It also includes whether the Silicon Valley companies need to be seen as influential media companies in their own right and what kind of regulation is needed in this scenario.

Silicon Valley starts a war against fake news

Article

Facebook and Google to block ads on fake news websites | Adnews

Facebook Employees Are In Revolt Over Fake News | Gizmodo

Google and Facebook Take Aim at Fake News Sites | New York Times

Does the internet have a fake-news problem? | CNet

Google CEO says fake news is a problem and should not be distributed | The Verge

Want to keep fake news out of your newsfeed? College professor creates list of sites to avoid | Los Angeles Times

My Comments

Since Donald Trump gained election victory in the USA, there has been some concern amongst a few of Silicon Valley’s tech companies regarding the existence of “fake news”.

This is typically a story that is presented as if it reports an actual news event but doesn’t relate to any real event. In some cases, such stories are hyped-up versions of an existing news item, but in a lot of cases these stories are built up on rumours.

The existence of Internet-distributed fake news has been of concern amongst journalists especially where newsroom budgets are being cut back and more news publishers and broadcasters are resorting to “rip-and-read” journalism, something previously associated with newscasts provided by music-focused FM radio stations.

Similarly, most of us are using Internet-based news sources as part of our personal news-media options, or as our only source of news, especially when we are using portable devices like ultraportable laptops, tablets or smartphones as our main Internet terminals for Web browsing.

Silicon Valley also see the proliferation of fake news as a threat to the provision of balanced coverage of news and opinion because they see this as a vehicle for delivering the populist political agenda rather than level-headed intelligent news. This is typically because the headline and copy in “fake news” reports is written in a way to whip up an angry sentiment regarding the topics concerned, thus discouraging further personal research.

But Facebook and Google are tackling this problem initially by turning off the advertising-money tap for fake-news sites. Facebook will apply this to ad-funded apps that work alongside these sites while Google will apply this as a policy for people who sign up to the AdSense online display-ads platform.

There is the issue of what kind of curating exists in the algorithms that list search results or news items on a search-engine or social-media page. It also includes how the veracity of news content is being deemed, even though Google and Facebook are avoiding being in a position where they can be seen as “arbiters of truth”.

The big question that can exist is what other actions could Silicon Valley take to curb the dissemination of fake news beyond just simply having their ad networks turn off the supply of advertising to these sites? This is because the popular search engines are essentially machine-generated indexes of the Web, while the Social Web and the blogosphere are ways where people share links to resources that exist on the Web.

Some people were suggesting the ability for a search engine like Google or a social network site like Facebook to have its user interface “flag” references to known fake-news stories, based on user or other reports. Similarly, someone could write desktop or mobile software like a browser add-on that does this same thing, or simply publish a publicly-available list of known “fake-news” Websites for people to avoid.

This is in fact an angle that a US-based college professor has taken, where she prepared a Google Docs resource listing the Websites hosting that kind of news in order to help people clean their RSS newsfeeds of misinformation, with some mainstream online news sources including New York Magazine providing a link to this resource.
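As a simple illustration of the kind of tool being described, here is a short sketch that checks the links in an RSS feed against a locally-maintained list of known fake-news domains. The feed URL and blocklist file are hypothetical placeholders for illustration only; the professor’s actual list was a Google Docs document rather than a machine-readable file.

```python
# Minimal sketch: flag items in an RSS feed whose links point at domains
# on a locally-maintained blocklist. The feed URL and blocklist file are
# hypothetical placeholders used purely for illustration.
import xml.etree.ElementTree as ET
from urllib.parse import urlparse
from urllib.request import urlopen

FEED_URL = "https://example.com/news/rss.xml"   # hypothetical feed
BLOCKLIST_FILE = "fake-news-domains.txt"        # one domain per line

with open(BLOCKLIST_FILE) as f:
    blocked = {line.strip().lower() for line in f if line.strip()}

with urlopen(FEED_URL, timeout=30) as resp:
    tree = ET.parse(resp)

for item in tree.iter("item"):
    link = item.findtext("link") or ""
    title = item.findtext("title") or "(untitled)"
    domain = urlparse(link).netloc.lower().removeprefix("www.")
    if domain in blocked:
        print(f"FLAGGED: {title} ({domain})")
```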

The issue of fake news distributed via the Internet is becoming a real problem, but Silicon Valley is looking at different ways to solve this problem and bring to it the same level of respect that was associated with traditional media.

You could be using your phone to sign in to Facebook on the big screen

Article

Apple TV 4th Generation press picture courtesy of Apple

You could be able to log in to Facebook on this device using your smartphone’s Facebook client

Facebook Login Updated for tvOS, FireTV, Android | AdWeek SocialTimes

From the horse’s mouth

Facebook

Developer News Press Release

Improving Facebook Login For TV and Android

My Comments

A holy grail that is being achieved for online services is to allow users to authenticate with these services when using a device that has a limited user interface.

TV remote control

A typical smart-TV remote control that can only offer “pick-and-choose” or 12-key data entry

An example of this is a Smart TV or set-top device, where the remote control for these devices has a D-pad and a numeric keypad. Similarly, you have a printer where the only interface is a D-pad or touchscreen, with a numeric keypad only for those machines that have fax capabilities.

Here, it would take a long time to enter one’s credentials for these services due to the nature of the interface. This is down to a very small software keyboard on a touchscreen, using “SMS-style” text entry on the keypad or “pick-and-choose” text entry using the D-pad.

Facebook initially looked at this problem by displaying an authentication code on the device’s user interface, or printing this code out, when you want to use Facebook from that device. Then you go to a Web-enabled computer or mobile device, log in to facebook.com/device and transcribe that code into the page to authenticate the device with Facebook.
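For readers curious about how this pattern works under the hood, here is a generic sketch of the “device code” login flow, the same idea standardised as the OAuth 2.0 Device Authorization Grant (RFC 8628). The endpoint URLs and client ID below are hypothetical placeholders, not Facebook’s actual API; they simply show the shape of the exchange, where the limited-interface device requests a code, shows it to the user, then polls until the user has approved it from a full browser.

```python
# Generic sketch of a device-code login flow (OAuth 2.0 Device
# Authorization Grant, RFC 8628). The URLs and client ID below are
# hypothetical placeholders, not Facebook's real endpoints.
import time
import requests

AUTH_SERVER = "https://auth.example.com"    # hypothetical
CLIENT_ID = "my-tv-app"                     # hypothetical

# 1. The TV or set-top device asks the service for a device code.
resp = requests.post(f"{AUTH_SERVER}/device/code",
                     data={"client_id": CLIENT_ID}, timeout=30)
resp.raise_for_status()
grant = resp.json()

# 2. The device shows the short user code on screen; the user enters it
#    at a verification page on their phone or computer.
print(f"Visit {grant['verification_uri']} and enter: {grant['user_code']}")

# 3. The device polls until the user approves (or the code expires).
while True:
    time.sleep(grant.get("interval", 5))
    poll = requests.post(
        f"{AUTH_SERVER}/token",
        data={"client_id": CLIENT_ID,
              "device_code": grant["device_code"],
              "grant_type": "urn:ietf:params:oauth:grant-type:device_code"},
        timeout=30)
    body = poll.json()
    if "access_token" in body:
        print("Logged in; access token received.")
        break
    if body.get("error") not in ("authorization_pending", "slow_down"):
        raise RuntimeError(body.get("error", "unknown error"))
```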

Here, they are realising that these devices have some role with the Social Web, whether to permit single sign-on, allow you to view photos on your account or use it as part of a comment trail. But they also know that most of us are working our Facebook accounts from our smartphones or tablets very frequently and are doing so with their native mobile client app.

But they are taking a leaf out of DIAL (DIscovery And Launch), which is being used as a way to permit us to throw YouTube or Netflix sessions that we start on our mobile devices to the big screen via our home networks. It avoids the long rigmarole of finding a “pairing screen” on both the large-screen and mobile apps, then transcribing a PIN or association code from the large screen to the mobile client to be able to have the session on the TV screen.
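For the technically curious, the discovery half of DIAL is plain SSDP multicast on the home network. The sketch below sends the standard DIAL search request and prints whatever devices answer; the search target string is the one defined by the DIAL specification, while everything else is ordinary UDP socket code.

```python
# Minimal sketch: discover DIAL-capable devices (smart TVs, set-top
# boxes) on the local network via an SSDP M-SEARCH multicast query.
import socket

SSDP_ADDR = ("239.255.255.250", 1900)
MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: urn:dial-multiscreen-org:service:dial:1",
    "", "",
]).encode("ascii")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)
sock.sendto(MSEARCH, SSDP_ADDR)

try:
    while True:
        data, addr = sock.recvfrom(4096)
        print(f"Response from {addr[0]}:")
        print(data.decode("utf-8", errors="replace"))
except socket.timeout:
    pass   # no more responses within the timeout window
finally:
    sock.close()
```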


This is where you will end up authenticating that big-screen app’s Facebook login request

What Facebook are now doing for the 4th generation Apple TV (tvOS) and Android-based TV/video peripheral platforms (Android TV / Amazon FireTV) is to use the mobile client app to authenticate.

Here, you use a newer version of the Facebook mobile client, the Facebook Lite client or the Google Chrome Custom Tabs to authenticate with the big screen across the home network. The TV or set-top device, along with the mobile device running the Facebook mobile client both have to be on the same logical network which would represent most small networks. It is irrespective of how each device is physically connected to the network such as a mobile device using Wi-Fi wireless and the Apple TV connected via HomePlug AV500 powerline to the router for reliability.

What will happen is that the TV app that wants to use Facebook will show an authentication code on the screen. Then you go to the “hamburger” icon in your Facebook mobile client and select “Device Requests” under Apps. There will be a description of the app and the device that is asking you to log in, along with the authentication code you saw on the TV screen. Once you are sure, you tap “Confirm” to effectively log in on the big screen.

At the moment, this functionality is being rolled out to tvOS and Android-based devices with them being the first two to support the addition and improvement of application programming interfaces. But I would see this being rolled out for more of the Smart TV, set-top box and similar device platforms as Facebook works through them all.

Spotify login screen

This kind of single-sign-on could apply to your Smart TV

One issue that may crop up is catering for group scenarios, which is a reality with consumer electronics that end up being used by the whole household. Here, software developers may want to allow multiple people to log in on the same device, which may be considered important for games with a multiplayer element, or to allow multiple users to be logged in with one user having priority over the device at a particular time, such as during an on-screen poll or with a photo app.

Another question that could be raised is where Facebook is used as the “hub” of a user’s single-sign-on experience. Here, an increasing number of online services including games are implementing Facebook as one of the “social sign-on” options and the improved sign-on experience for devices could be implemented as a way to permit this form of social sign-on across the apps and services offered on a Smart TV for example. It could subsequently be feasible to persist current login / logout / active-user status across one device with all the apps following that status.

Other social-media, messaging or similar platforms can use this technology as a way to simplify the login process for client-side devices that use very limited user interfaces. This is especially where the smartphone becomes the core device where the user base interacts with these platforms frequently.