Social Web Archive

Being cautious about fake news and misinformation in Australia

Previous Coverage

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Are you sure you are casting your vote or able to cast your vote without undue influence?

Being aware of fake news in the UK

Fact-checking now part of the online media-aggregation function

Useful Australian-based resources

ABC Fact Check – run in conjunction with RMIT University

Political Parties

Australian Labor Party (VIC, NSW)

Liberal Party – work as a coalition with National Party (VIC, NSW)

National Party – work as a coalition with Liberal Party (VIC, NSW)

Australian Greens – state branches link from main page

One Nation (Pauline Hanson)

Katter’s Australia Party

Derryn Hinch’s Justice Party

Australian Conservatives

Liberal Democratic Party

United Australia Party

My Comments

Over the next six months, Australia will see some very critical general elections take place, both at the federal level and in the two most populous states that host most of the country’s economic and political activity. On 30 October 2018, the election writs were served in Victoria for its general election on 24 November 2018. New South Wales is then expected to go to the polls for its general election on 23 March 2019, and the whole country is expected to go to the polls for the federal general election by 18 May 2019.

As these election cycles take place over a relatively short space of time, there is a high risk that Australians could fall victim to misinformation campaigns. This could subsequently lead to state and federal ballots being cast that steer the country against the grain, as happened in 2016 when the USA voted in Donald Trump as President and the UK voted to leave the European Union.

Google News - desktop Web view

Look for tags within Google News that describe the context of the story

The issue of fake news and misinformation is being seen as increasingly relevant as we switch away from traditional media towards social media and our smartphones, tablets and computers for our daily news consumption. This is thanks to the use of online search and news-aggregation services like Google News, or social media like Facebook or Twitter, which most of us treat as an “at-a-glance” view of the news.

As well, a significant number of well-known newsrooms are shrinking due to reduced circulation and ratings for their newspapers or radio / TV broadcasts as we turn to online resources for our news. This can lead to poor-quality news reporting and presentation, with a calibre equivalent to the hourly news bulletin offered by a music-focused radio station. It also leads to various mastheads plagiarising content from other newsrooms that place more value on their reporting.

The availability of low-cost or free no-questions-asked Web and video hosting, along with easy-to-use Web-authoring, desktop-publishing and desktop-video platforms, makes it feasible for most people to create a Web site or online video channel. This has led to an increased number of Websites and video channels that peddle propaganda and information dressed up as news but of questionable accuracy.

Another factor that has recently been raised in the context of fake news, misinformation and propaganda is the creation and use of deepfake image and audio-visual content. This is where still images, audio or video clips in the digital domain are altered using artificial-intelligence technology to show a falsehood, in order to convince viewers that they are dealing with an original audio-visual resource. The audio content can be made to mimic an actual speaker’s voice and intonation as part of creating a deepfake soundbite or video clip.

It then becomes easy to place fake news, propaganda and misinformation onto easily-accessible Web hosts including YouTube in the case of videos. Then this content would be propagated around the Internet through the likes of Twitter, Facebook or online bulletin boards. It is more so if this content supports our beliefs and enhances the so-called “filter bubble” associated with our beliefs and media use.

There is also the fact that newsrooms without the resources to rigorously scrutinise incoming news could pick this kind of content up and publish or broadcast this content. This can also be magnified with media that engages in tabloid journalism that depends on sensationalism to get the readership or keep listeners and viewers from switching away.

The borderless nature of the Internet makes it easy to set up presence in one jurisdiction to target the citizens of another jurisdiction in a manner to avoid being caught by that jurisdiction’s election-oversight, broadcast-standards or advertising-standards authority. Along with that, a significant number of jurisdictions focus their political-advertising regulation towards the traditional media platforms even though we are making more use of online platforms.

Recently, the Australian Electoral Commission, along with the Department of Home Affairs, Australian Federal Police and ASIO, set up an Electoral Integrity Assurance Task Force. This was in advance of recent federal byelections such as the Super Saturday byelections, where there was a risk of clandestine foreign interference that could affect the integrity of those polls.

But the issue I am drawing attention to here is the use of social media or other online resources to run fake-news campaigns to sway the populace’s opinion for or against certain politicians. This is exacerbated by under-resourced newsrooms that could give such material credibility in the public’s eyes.

But most of Silicon Valley’s online platforms are taking various steps to counter fake news, propaganda and disinformation, including the following.

Firstly, they are turning off the money-supply tap by keeping their online advertising networks away from sites or apps that spread misinformation.

They also are engaging with various fact-check organisations to identify fake news that is doing the rounds and tuning their search and trending-articles algorithms to bury this kind of content.

Autocomplete list in Google Search Web user interface

Google users can report Autocomplete suggestions that they come across in their search-engine experience.

They are also maintaining a feedback loop with their end-users by allowing them to report fake-news entries in their home page or default view. This includes search results or autocomplete entries in Google’s search-engine user interface. This is facilitated through a “report this” option that is part of the service’s user interface or help pages.

Most of the social networks and online-advertising services are also implementing robust user-account-management and system-security protocols. This includes eliminating or suspending accounts that are used for misinformation. It also includes checking the authenticity of accounts running pages or advertising campaigns that are politically-targeted through methods like street-address verification.

In the case of political content, social networks and online-advertising networks are implementing easily-accessible archives of all political advertising or material that is being published including where the material is being targeted at.

ABC FactCheck – the ABC’s fact-checking resource that is part of their newsroom

Initially these efforts are taking place within the USA but Silicon Valley is rolling them out across the world at varying timeframes and with local adaptations.

Personally, I would still like to see a strong dialogue between the various Social Web, search, online-advertising and other online platforms; and the various government and non-government entities overseeing election and campaign integrity and allied issues. This can be about oversight and standards regarding political communications in the online space along with data security for each stakeholder.

What can you do?

Look for any information that qualifies the kind of story when you are viewing a collection of headlines, such as on a search or news-aggregation site or app. Here, pay attention to tags or other metadata like “satire”, “fact checking” or “news” that describe the context of the story or other attributes.

Most search engines and news-aggregation Websites will show this information in their desktop or mobile user interface, and they are being engineered to show a richer set of details. On some user interfaces you may have to do something extra, like click a “more” icon or dwell on the headline, to bring up this extra detail.
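Behind the scenes, the “fact check” tags you see in these aggregators are often driven by schema.org ClaimReview markup that fact-checking outlets embed in their pages as JSON-LD. As a rough sketch of how such markup can be read — the sample page and the exact fields pulled out are illustrative, and a production parser would use a proper HTML parser rather than a regular expression:

```python
import json
import re

def extract_claim_reviews(html: str):
    """Pull schema.org ClaimReview records out of a page's JSON-LD blocks."""
    reviews = []
    # Find every <script type="application/ld+json"> block in the page
    for block in re.findall(
            r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
            html, flags=re.DOTALL):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # skip malformed markup
        items = data if isinstance(data, list) else [data]
        for item in items:
            if item.get("@type") == "ClaimReview":
                reviews.append({
                    "claim": item.get("claimReviewed"),
                    "verdict": item.get("reviewRating", {}).get("alternateName"),
                })
    return reviews

# A fabricated example page carrying fact-check markup
sample_html = """
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "ClaimReview",
 "claimReviewed": "Enrolment closes a week before polling day",
 "reviewRating": {"@type": "Rating", "alternateName": "Mostly true"}}
</script>
"""

print(extract_claim_reviews(sample_html))
```

A page carrying this markup tells the aggregator which claim was checked and what verdict the fact-checker reached, which is what gets surfaced as the tag next to the headline.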

Trust your gut reaction to that claim being shared around social media. You may realise that a claim associated with fake news may be out of touch with reality. Sensationalised or lurid headlines are a usual giveaway, along with missing information or copy that whips up immediate emotional responses from the reader.

Check the host Website or use a search engine like Google to see if the news sources you trust cover that story. You may also come across tools that identify questionable news easily, typically in the form of a plug-in or extension for your browser, if its functionality can be expanded with these kinds of add-ons. This is more established with browsers that run on regular Windows, Mac or Linux computers.

It is also a good idea to check for official press releases or similar material offered “from the horse’s mouth” by the candidates, political parties, government departments or similar organisations themselves. During elections, some candidates may run their own Websites, or a Website that links from their political party’s Website. The material on the Websites run by these organisations may indicate whether you are dealing with a “beat-up” or an exaggeration of the facts.

As you do your online research into a topic, make sure that you are familiar with how URLs are represented in your browser’s address bar for the various online resources you visit. In particular, be careful if a resource has more than expected between the “.com”, “.gov.au” or similar domain-name ending and the first “/” leading to the actual online resource.
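That address-bar check can be expressed mechanically. The sketch below is purely illustrative — the “trusted” domain list and the lookalike URL are made up, and real tools consult the Public Suffix List to work out the registrable domain — but it captures the idea of a trusted name turning up inside somebody else’s hostname:

```python
from urllib.parse import urlparse

TRUSTED = {"abc.net.au", "aec.gov.au", "smh.com.au"}  # illustrative list

def looks_like_spoof(url: str) -> bool:
    """Flag URLs where a trusted domain name appears in the hostname
    but is NOT actually the domain the browser will connect to."""
    host = urlparse(url).hostname or ""
    for trusted in TRUSTED:
        # Genuine: host is the trusted domain or a subdomain of it
        if host == trusted or host.endswith("." + trusted):
            return False
        # Suspicious: trusted name buried inside some other hostname
        if trusted in host:
            return True
    return False  # unknown domain: no verdict either way

print(looks_like_spoof("https://www.abc.net.au/news/"))            # genuine
print(looks_like_spoof("https://abc.net.au.update-poll.example/")) # spoofed
```

In the second URL the browser actually connects to the “update-poll.example” domain, not the ABC, even though “abc.net.au” appears at the start of the hostname.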

Kogan Internet table radio

Sometimes the good ol’ radio can be the trusted news source

You may have to rely on getting your news from one or more trusted sources. This would include the online presence offered by these sources. Or it may be about switching on the radio or telly for the news or visiting your local newsagent to get the latest newspaper.

Examples of these are the ABC (Radio National, Local Radio, News Radio, the main TV channel and the News 24 TV channel), SBS TV, or the Fairfax newspapers. Music radio stations that are part of a family run alongside a talk-radio network, like the ABC’s Classic FM or Triple J services, will have an hourly newscast with news from that network. But be careful when dealing with tabloid journalism or commercial talkback radio, because you may be exposed to unnecessary exaggeration or distortion of facts.

As well, use the social-network platform’s or search engine’s reporting functionality to draw attention to fake news, propaganda or misinformation that is being shared or highlighted on that online service. In some cases like reporting inappropriate autocomplete predictions to Google, you may have to use the platform’s help options to hunt for the necessary resources.

Here, as we Australians face a run of general-election cycles that can be very tantalising for clandestine foreign interference, we have to be on our guard against fake news, propaganda and misinformation that could affect the polls.


Facebook clamps down on voter-suppression misinformation

Article

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Are you sure you are casting your vote or able to cast your vote without undue influence?

Facebook Extends Ban On Election Fakery To Include Lies About Voting Requirements | Gizmodo

From the horse’s mouth

Facebook

Expanding Our Policies on Voter Suppression (Press Release)

My Comments

Over recent years, misinformation and fake news have been used as a tool to attack the electoral process in order to steer the vote towards candidates or political parties preferred by powerful interests. This was demonstrated through the UK Brexit referendum and the USA Presidential Election in 2016, with out-of-character results emanating from both polls. It has therefore made us more sensitive to the power of misinformation and its use in influencing an election cycle, with most of us looking towards established news outlets for our political news.

Another attack on the electoral process in a democracy is the use of misinformation or intimidation to discourage people from registering on the electoral rolls including updating their electoral-roll details or turning up to vote. This underhand tactic is typically to prevent certain communities from casting votes that would sway the vote away from an area-preferred candidate.

Even Australia, with its compulsory voting and universal suffrage laws, isn’t immune from this kind of activity, as demonstrated in the recent federal byelection for the Batman electorate. Here, close to election day, there was a robocall campaign targeted at older people in the north of the electorate who were likely to vote for an Australian Labor Party candidate rather than the area-preferred Greens candidate.

But this is a very common trick performed in the USA against minority, student or other voters to prevent them from casting votes for liberal candidates. It manifests in accusations about non-citizens casting votes or the same people voting in multiple electorates.

Facebook have taken further action against voter-suppression misinformation by including it in their remit against fake news and misinformation. This action has been taken as part of Silicon Valley’s efforts to work against fake news during the US midterm Congressional elections.

At the moment, this effort applies to information regarding exaggerated identification or procedural requirements concerning enrolment on the electoral rolls or casting your vote. It doesn’t yet apply to reports about conditions at the polling booths like opening hours, overcrowding or violence. Nor does this effort approach the distribution of other misinformation or propaganda to discourage enrolment and voting.

US-based Facebook end-users can use the reporting workflow to report voter-suppression posts to Facebook, via an “Incorrect Voting Info” option that you select when reporting posted content. This allows the information to be verified by fact-checkers engaged by Facebook, with false content “buried” in the News Feed and additional relevant content supplied with the article when people discover it.

This is alongside a constant Facebook effort to detect and remove fake accounts existing on the Facebook platform along with increased political-content transparency across its advertising platforms.

As I have always said, the issue regarding misleading information that influences the election cycle can’t just be handled by social-media and advertising platforms themselves. These platforms need to work alongside the government-run electoral-oversight authorities and similar organisations that work on an international level to exchange the necessary intelligence to effectively identify and take action against electoral fraud and corruption.


How can social media keep itself socially sane?

Broadcast

Facebook login page

Four Corners (ABC Australia) – Inside Facebook

iView – Click to view

Transcript

My Comments

I had just watched the Four Corners “Inside Facebook” episode on ABC TV Australia, which touched on the issues and impact Facebook is having concerning content made available on its platform. It was in relation to recent questions about the Silicon Valley social-media and content-aggregation giants and their responsibility for content made available by their users.

I also saw the concepts raised in this episode come to the fore over the past few weeks with the InfoWars conspiracy-theory site saga that was boiling over in the USA. There, concern was being raised about the vitriol that the InfoWars site was posting, especially in relation to recent school shootings in that country. At the time, podcast-content directories like Spotify and Apple iTunes were pulling podcasts generated by that site.

The telecast highlighted how the content moderation staff contracted by Facebook were handling questionable content like self-harm, bullying and hate speech.

For most of the time, Facebook took a content-moderation approach where the bare minimum of action was taken to deal with questionable content. This was because, if they took a heavy-handed approach to censoring content that appeared on the platform, end-users would drift away from it. But recent scandals and issues like the Cambridge Analytica affair and the allegations regarding fake news have put Facebook on edge regarding this topic.

Drawing attention to and handling questionable content

At the moment, Facebook are outsourcing most of the content-moderation work to outside agencies and have been very secretive about how this is done. The content-moderation workflow operates on a reactive basis, in response to other Facebook users using the “report” function in the user interface to draw attention to questionable content.

This is very different from managing a small blog or forum, which one person or a small team could do thanks to the small amount of traffic these small Web presences attract. Facebook has to engage these content-moderation agencies to be able to work at the large scale it operates at.

Reporting questionable content, especially abusive content, is hampered by the weak user experience offered for reporting it. This is more so where Facebook is used through an interface that offers less than the full Web-based user experience, such as some native mobile-platform apps.

This matters because, in most democratic countries, social media, unlike traditional broadcast media, is not subject to government oversight and regulation. Nor is it subject to oversight by “press councils” as traditional print media would be.

Handling content

When a moderator is faced with content identified as having graphic violence, they have three options: ignore the content, leaving it as-is on the platform; delete the content, removing it from the platform; or mark it as disturbing, which subjects the content to restrictions on who can see it and how it is presented, including a warning notice the user must click through before the content is shown. As well, they can notify the publisher who put up the content about the action that has been taken with it. In some cases, “marking as disturbing” may be a method used to raise common awareness of the situation portrayed in the content.

They also touched on dealing with visual content depicting child abuse. One of the factors raised is that the more views such content receives, the more the abuse against the victim of that incident is multiplied.

As well, child-abuse content isn’t readily reported to law-enforcement authorities unless it is streamed live using Facebook’s live-video streaming function. This is because the video clip could have been put up by someone at a prior time and on-shared by someone else, or it could be a link to content already hosted somewhere else online. But Facebook and their content-moderating agencies engage child-safety experts as part of their moderating team to determine whether content should be reported to law enforcement (and which jurisdiction should handle it).

When facing content that depicts suicide, self-harm or similar situations, the moderating agencies treat these as high-priority situations. If the content promotes this kind of self-destructive behaviour, it is deleted. Other material is flagged so as to show a “checkpoint” on the publisher’s Facebook user interface, where the user is invited to take advantage of mental-health resources local to them and particular to their situation.

But it is a situation where a desperate Facebook user posts this kind of content as a personal “cry for help”, which isn’t healthy. Typically it is a way to let their social circle, i.e. their family and friends, know of their personal distress.

Another issue that has been raised is the existence of underage accounts, where children under 13 operate a Facebook presence by lying about their age. But these accounts are only dealt with if a Facebook user draws attention to the account’s existence.

An advertising–driven platform

What was highlighted in the Four Corners telecast was that Facebook, like the other Silicon Valley social-media giants, makes most of its money out of on-site advertising. The more engagement end-users have with these platforms, the more advertising appears on the pages, including new ads, which leads to more money for the social-media giant.

This is why some questionable content still exists on Facebook and similar platforms: it increases engagement with them. This is despite the fact that most of us who use these platforms aren’t likely to actively seek out this kind of content.

But this show hadn’t even touched on the concept of “brand safety”, which is being raised in the advertising industry. This is the issue of a brand’s image appearing next to controversial content, which could be seen as damaging to the brand’s reputation; it is a concept highly treasured by most consumer-facing brands maintaining a “friendly to family and business” image.

A very challenging task

Moderating staff can also find themselves in very mentally challenging situations while doing this job, because in a lot of cases this kind of disturbing content can effectively play itself over and over again in their minds.

The hate speech quandary

The most contentious issue that Facebook, like the rest of the Social Web, is facing is hate speech. But what qualifies as hate speech, and how obvious does it have to be before it must be acted on? The broadcast initially drew attention to an Internet meme objecting to “one’s (white) daughter falling in love with a black person”, which on its own doesn’t underscore an act of hatred. The factors that may be used as qualifiers include the minority group involved, the role they play in the accusation, the context of the message, and the kinds of pejorative terms used.

They also underscored the provision of a platform to host legitimate political debate. But Facebook can delete resources if a successful criminal action is taken against the publisher.

Facebook has a “shielded” content policy for highly popular political pages, something similarly afforded to respected newspapers and government organisations; such pages can be treated as if they were “sacred cows”. If an issue is raised about their content, the complaint is taken to full-time content moderators employed directly by Facebook, who determine what action should be taken.

A question raised in the context of hate speech was the successful criminal prosecution of alt-right activist Tommy Robinson for sub judice contempt of court in Leeds, UK. He had used Facebook to make a live broadcast about a criminal trial in progress as part of his far-right agenda. Twitter had taken down the offending content, while Facebook didn’t act on the material. From further personal research on extant media coverage, he had committed a similar contempt-of-court offence in Canterbury, UK, underscoring a similar modus operandi.

A core comment raised about Facebook and the Social Web is that the more open the platform, the more likely one is to see inappropriate, unpleasant and socially undesirable content on it.

But Facebook have been running a public-relations campaign regarding cleaning up its act in relation to the quality of content that exists on the platform. This is in response to the many inquiries it has been facing from governments regarding fake news, political interference, hate speech and other questionable content and practices.

Although Facebook is the most common social-media platform in use, the issues drawn out regarding the posting of inappropriate content also affect other social-media platforms and, to some extent, other open freely-accessible publishing platforms like YouTube. There is also the fact that these platforms can be used to link to content already hosted on other Websites, such as those facilitated by cheap or free Web-hosting services.

There may be some issues covered in this article that concern you or someone else using Facebook. Here are some resources that can help:

Australia

Lifeline

Phone: 13 11 14
http://lifeline.org.au

Beyond Blue

Phone: 1300 22 46 36
http://beyondblue.org.au

New Zealand

Lifeline

Phone: 0800 543 354

Depression Helpline

Phone: 0800 111 757

United Kingdom

Samaritans

Phone: 116 123
http://www.samaritans.org

SANELine

Phone: 0300 304 7000
http://www.sane.org.uk/support

Eire (Ireland)

Samaritans

Phone: 1850 60 90 90
http://www.samaritans.org

USA

Kristin Brooks Hope Center

Phone: 1-800-SUICIDE
http://imalive.org

National Suicide Prevention Lifeline

Phone: 1-800-273-TALK
http://www.suicidepreventionlifeline.org/


Instagram is offering a video service that competes against YouTube

Article

Instagram – now supporting IGTV and competing with YouTube

Instagram is launching its YouTube clone, IGTV, on Android in a few weeks | Android Authority

IGTV in action

Meet Instagram’s YouTube Clone: IGTV | Gizmodo Australia

Here’s IGTV: Instagram’s vertical answer to YouTube | FastCompany

My Comments

There have been some recent situations where YouTube has become arrogant in how it treats end-users, content creators and advertisers, thanks to its effective monopoly on user-generated video content. One of these was a fight between Google and Amazon over voice-driven personal assistants, which led to Google removing YouTube support from Amazon’s Echo Show smart display. I even wrote that it is high time YouTube faced competition in order to lift its game.

Initially, Framasoft, a French open-source software developer, got working on an open-source video-distribution mechanism called “PeerTube” with a view to having it compete against YouTube.

But Instagram, owned by Facebook, has set up its own video-sharing platform called IGTV. It will be available as a separate iOS/Android mobile-platform app, but will also allow the clips to appear in your main Instagram user experience.

Initially this service will offer video in a vertical format up to one hour long. The format was chosen to complement the fact that it is likely to be used on a handheld smartphone or tablet. The one-hour length will be offered to select content creators rather than everyone, while most of us will end up with 10 minutes. This may also suit the creation of “snackable” video content.

Currently Instagram offers video posting of 60 seconds on its main feed or 15 seconds in its Stories function. This is why I often see Stories pertaining to the same event made up of many daisy-chained videos.

The IGTV user experience will have you immediately begin watching video content from whoever you follow on Instagram. There will be playlist categories like “For You” (videos recommended for you), “Following” (videos from whom you follow), “Popular” (popular content) and “Continue Watching” (clips you are already working through).

The social-media aspect will allow you to like or comment on videos as well as sharing them to your friends using Instagram’s Direct mode. As well, each Instagram creator will have their own IGTV channel which will host the longer clips.

A question that can easily come up is whether Instagram will make IGTV work for usage beyond mobile-platform viewing. This means support for horizontal aspect ratios, or viewing on other devices like smart-display devices of the Echo Show ilk, regular computers, or Smart TV / set-top devices including games consoles.

It is an effort by Instagram and Facebook to compete for video viewers and creators, but I see the restriction to the vertical format as a handicap if the idea is to directly compete with YouTube. Facebook and Instagram need to look at what YouTube isn’t offering, and at the platforms it has deserted, in order to gain an edge.


Google and Facebook are starting to bring accountability to political advertising

Articles

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Are you sure you are casting your vote without undue influence? (Courtesy of Australian Electoral Commission)

Facebook announces major changes to political ad policies | NBC News

Facebook reveals new political ad policies in wake of U.S. election | VentureBeat

What Can and Can’t You Do with Political Advertising on Facebook? | Spatially

Google Joins Facebook In Banning All Ads Related To Ireland’s Big Abortion Vote | Gizmodo

From the horse’s mouth

Facebook

Update on Our Advertising Transparency and Authenticity Efforts (Press Release)

Facebook will not be accepting referendum related ads from advertisers based outside of Ireland (Press Release)

Google

Supporting election integrity through greater advertising transparency (Blog Post)

My Comments

Over the last five months, a strong conversation has arisen surrounding electioneering and political advertising on online platforms, including social media and online advertising.

The trend concerning this activity is that political advertising spend is moving away from traditional print and broadcast media towards online media, as we make more use of highly portable computing devices to consume our information and entertainment.

Issues that have also been raised include the use of fake comments and pre-programmed auto-responding “bots” as part of political campaigns. This is alongside the rise of very divisive political campaigns during the 2016 Brexit and US Presidential election cycles that played on racial and religious prejudices. There is also the fact that nation states with improper intentions are seeing the idea of poisoning the information flow as another weapon in their cyber-warfare arsenal.

It has also been facilitated through the use of highly-focused data-driven campaign-targeting techniques based on factors like race, gender, location and interests, with this practice being highlighted in the Cambridge Analytica saga that caught up Facebook and Twitter.

As well, the online advertising and social media platforms have made it easy to create and maintain an advertising or editorial campaign that transcends jurisdictional borders. This is compared to traditional media, where the advertising material would have to pass muster with the media outlet’s advertising staff in the outlet’s market before it hits the presses or the airwaves.

This issue will become more real with the use of addressable TV advertising, which is currently practised with some advertising-based video-on-demand services and some cable-TV platforms, but will become the norm as traditional linear TV is delivered through the increasing use of interactive-TV platforms.

This technology would facilitate “hyper-targeting” of political campaigns such as municipal-level or postcode/ZIP-code targeting yet maintain the same “air of legitimacy” that the traditional TV experience provides, making it feasible to destabilise elections and civil discourse on the local-government level.

Election-oversight authorities in the various jurisdictions like the Australian Electoral Commission or the UK’s Electoral Commission have been doing battle with the online trend because most of the legislation and regulation surrounding political and election activities has been “set in stone” before the rise of the Internet. For example, in most jurisdictions, you will see or hear a disclosure tag after a political advertisement stating which organisation or individual was behind that ad. Or there will be financial reporting and auditing requirements for the election campaigns that take place before the polls.

Facebook and Google are having to face these realities through updated advertising-platform policies which govern political advertising. But, at the time of writing, Facebook applies this to both candidate-based and issues-based campaigns while Google applies it to candidate-based campaigns only.

Firstly, there is a prohibition on political advertising from entities foreign to the jurisdiction that the ad targets. This is in line with legislation and regulation implemented by most jurisdictions proscribing foreign donations to political campaigns affecting that jurisdiction.

This is augmented through a requirement for political advertisers to furnish proof of identity and residence in the targeted jurisdiction. In the case of Facebook, they apply this policy to pages and profiles with very large followings as well as ads. They also implement a postcard-based proof-of-residence procedure, where they send a postcard by snail mail to the user’s US-based home or business address to verify presence in the USA.

Facebook augments this requirement by using artificial intelligence to flag whether an ad is political, so they can make sure that the advertiser is complying with the requirements for political advertising on this platform.
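Facebook’s actual classifier is a proprietary machine-learning system, so purely as an illustration of the idea, here is a toy keyword-based flagger; the term list and the `looks_political` function are my own inventions for this sketch, not anything Facebook exposes:

```python
# Toy sketch of automatically flagging ad copy as "political" so it can
# be routed to advertiser-verification checks. Real systems use trained
# machine-learning models; this keyword approach is only illustrative.

POLITICAL_TERMS = {"vote", "election", "ballot", "candidate",
                   "referendum", "campaign", "senator", "mp"}

def looks_political(ad_text: str, threshold: int = 1) -> bool:
    """Flag the ad if it contains at least `threshold` political terms."""
    words = {w.strip(".,!?\"'").lower() for w in ad_text.split()}
    return len(words & POLITICAL_TERMS) >= threshold

print(looks_political("Vote for change this election!"))       # True
print(looks_political("Big spring sale on garden furniture"))  # False
```

A real deployment would also weigh images, landing pages and advertiser history, which is why flagged ads feed into a human verification step rather than an automatic ban.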

Like with traditional media, political ads on both these platforms will be required to have a disclosure tag. But Facebook goes further by making this a hyperlink that end-users can click on to see details like verification documents, why the viewer saw the ad along with a link to the sponsoring organisation’s Facebook Page. This has more utility than the slide shown at the end of a TV or online ad, the voice-announcement at the end of a radio ad or small text at the bottom of a print-media ad or billboard poster which most of these tags represent.

Both of the Internet titans will also make sure details about these campaigns are available and transparent to end-users so they know what is going on. For example, Facebook requires advertisers to maintain a Facebook Page before they buy advertising on any of the Facebook-owned platforms. This will have a “View Ads” tab which includes details about targeting of each current and prior campaign with a four-year archive allowance.

Google has taken things further by making sure that political organisations, politicians, the media and journalists are aware of the resources they have to assure data security for their campaigns and other efforts. Here, they have prepared a “Protect Your Election” Webpage that highlights the resources that they provide that are relevant for each kind of player in a political campaign. This includes Project Shield to protect Websites against distributed denial-of-service attacks, along with enhanced security measures available to operators of Google Accounts associated with critical data.

Both companies have been implementing these procedures in North America, with Facebook trying them out in Canada then “cementing” them into the USA before the midterm Congress election cycle there. Both companies also took action to suspend political ads from foreign entities outside Ireland during the election cycle for the Eighth Amendment abortion referendum taking place in that country. Here, they applied the prohibition until the close of polls on May 25 2018. Let’s not forget that these standards will be gradually rolled out into other jurisdictions over time.

But what I would like to see is for companies who run online advertising and social-media activity to liaise strongly with election-oversight officials in the various jurisdictions especially if it affects a currently-running poll or one that is to take place in the near future. This is in order to advise these officials of any irregularities that are taking place with political advertising on their online platforms or for the officials to notify them about issues or threats that can manifest through the advertising process.

 

Send to Kindle

It is time for YouTube to face competition

Amazon Echo Show in kitchen press picture courtesy of Amazon

Google not allowing Amazon to provide a native client for the popular YouTube service on the Echo Show highlights how much control they have over the user-generated video market

Over the years, YouTube has established a name for itself in the delivery of user-generated video content through our computers. This included video created by ordinary householders, ranging from the many puppy and kitten videos through to personal video travelogues. But a lot of professional video creators have used it to run showreels or simply host their regular content such as corporate videos and film trailers, with some TV channels even hosting shows on it for a long time.

After Google took over YouTube, there have been concerns about its availability across platforms other than the Web. One of the first instances was Apple being told to drop its native YouTube client from iOS, with users having to install a Google-developed native client for this service on their iOS devices.

Recently, Google pulled YouTube from Amazon’s Echo Show device, ostensibly because it didn’t have a good-enough user interface. But it is really down to Google wanting to integrate YouTube playback into their Google Home and Chromecast platforms, with the idea of running it as a feature exclusive to those voice-driven home-assistant platforms.

YouTube Keyboard Cat

Could the Web be the only surefire place to see Keyboard Cat?

These instances can affect whether you will be able to view YouTube videos on your Smart TV, set-top box, games console, screen-equipped smart speaker or similar device. They will also affect whether the designer of such a device can integrate YouTube functionality into it in a native form, or improve on this functionality through the device’s lifecycle. The concern becomes stronger if the device or platform is intended to compete directly with something Google offers.

There are some video services, like Vimeo and Dailymotion, that support user-generated and other video content. But these services are focused towards businesses or professionals who want to host video content and present it with an air of uninterrupted concentration. This can be a limitation for small-time operators such as bloggers and community organisations who want to get their feet wet with video.

Facebook is starting to provide some competition with its Watch service, but this will require users to have a presence on the Facebook social network, something that may not be desirable for some people. Amazon have opened up their Prime streaming-video platform to all sorts of video publishers and creators, positioning it as Amazon Video Direct, but this will require viewers to be part of the Amazon Prime platform.

But for people who publish to consumer-focused video services like YouTube, competition will require them to put content on multiple services. Small-time video publishers will either have to upload to each platform separately for a wider reach, or use a video-distribution platform that allows for “upload once, deliver many” operation.

Competition could open up multiple options for publishers, equipment / platform designers, and end-users. For example, it could open up monetisation options for publishers’ works, simplify proper dealing with copyrighted works used within videos, open up native-client access for more platforms, amongst other things.

But there has to be enough competition to keep the market sustainable, and each of the platforms must support the ability to view a video without the user being required to create an account beforehand. The market should also support the existence of niche providers so as to cater to particular publishers’ and viewers’ needs.

In conclusion, competition could make it harder for YouTube to effectively “own” the user-generated consumer video market and control how this market operates including what devices the content appears on.

Send to Kindle

Facebook videos can be thrown from your mobile device to the big screen

Article

AirPlay devices discovered by iPad

Facebook videos can be directed to that Apple TV or Chromecast device

Now you can stream Facebook video on your TV | Mashable

My Comments

You are flicking through what your friends have posted up on Facebook and have come across that interesting video one of them put up from their trip or family event. But you would like to give it the “big screen” treatment by showing it on the large TV in the lounge so everyone can watch.

Now you will be able to with the Facebook native apps for the iOS and Android mobile platforms. Here, you can “throw” the video to a TV that is connected to an Apple TV or Chromecast / Google Cast device on the same home network as your mobile device. This will apply to videos offered by your Friends and Pages that you follow including any of the Facebook Live content that is made available.

A frame from a Facebook video that could be given the big-screen treatment

Here, when you see the Facebook video on the latest iteration of your Facebook native client, you will see a TV icon beside the transport controls for the video. When you tap that icon, you will see a list of the Apple TV or Chromecast devices on your network that you can “throw” the video to. Once you select the device you want to stream the video to, then it will appear on the TV.

Facebook also values the idea of you being able to continue browsing the social network while the video plays, something that can be useful for following comments left about that video clip.

Apple TV 4th Generation press picture courtesy of Apple

One of these devices could take Facebook on your iPhone further

The article also reckoned that Facebook exploiting Google Cast and Apple AirPlay, rather than creating native apps for the Android TV and Apple TV platforms, is a cheaper option. But I also see it as an advantage, because you don’t need to support the multiple sign-ons that both platforms would require thanks to the large-screen TV being used by many people.

A good question to raise is whether you could do the same with photos that have been uploaded to Facebook. From my experience with Facebook, people who are travelling tend to press their presence on this social network, and its stablemate Instagram, into service as an always-updated travelogue, uploading impressive images from their journeys. Here, you may want to show images from these collections on that big screen in a manner that does them justice.

At least Facebook are making efforts to exploit the big screen in the lounge by using Apple TV and Google Cast technology as a way to throw videos and Facebook Live activity to it.


Facebook Messenger goes native on Windows 10 desktop at last

Article

Facebook finally brings Messenger and Instagram apps to Windows 10 | CNet

Facebook Messenger for Windows 10 PC now live in the Windows Store | Windows Central

From the horse’s mouth

Facebook

Press Release

Windows Store link

My Comments

Facebook Messenger Windows 10 native client

Facebook Messenger – now native on Windows 10

Previously, I wrote about why desktop operating systems need to be supported with native-client apps for messaging platforms. Here I highlighted how the likes of ICQ, AOL Instant Messenger and Skype started off in the “regular-computer” / desktop operating system sphere and when the smartphones came on the scene, newer messaging platforms ended up being based on iOS and Android mobile platforms first.

Facebook Messenger Windows 10 live tile

Facebook Messenger live tile – now a message waiting indicator

The advantages that I highlighted included a stable client program that works tightly with the operating system, along with the ability to work tightly with the operating system’s file-system, security and user-experience features, extracting the maximum benefit from the user experience.

Now Facebook have met this goal by providing a native client for Microsoft Windows 10 users, especially those of us using regular computers running this operating system.

Facebook Messenger Live Tile - Tablet mode

This program ticks the boxes for a native client app by using its Notification Center to show incoming messages and chats; along with the ability to show messages as a Live Tile on your Start Menu. There is the ability to upload photos, videos and GIFs from your computer’s file system, which can be a bonus when you have downloaded your pictures from your good digital camera and worked on them using a good image-editing tool.

Of course, you have the features associated with your iOS-based or Android-based Facebook Messenger experience such as knowing when your correspondents are “up-to-date” with the conversation. As well, you have that similarly uncluttered experience which makes it easy to navigate your chats while it doesn’t take up much room on your screen when it is in the default windowed state.


Facebook launches a “Safety Check” program for use during emergencies

Facebook Safety Check iPhone notification screenshot courtesy Facebook

Facebook Safety Check notification on iPhone

Articles

SAFETY CHECK: Facebook Tool Simplifies Users’ Communication During Disasters, Crisis Situations | AllFacebook

Facebook’s new Safety Check lets you tell friends you’re safe when disaster strikes | NakedSecurity (Sophos)

From the horse’s mouth

Facebook

Introducing Safety Check (Press Release)

Feature Description

My Comments

Facebook has just released a system which works during natural disasters or other civil emergencies to allow people to be sure that those friends of theirs who are in the affected areas are OK. This system, known as Safety Check, was born out of a “notice board” feature that Facebook built into their system during the 2011 Japanese earthquake and tsunami. It would still complement other methods like phoning or texting those you know in the affected areas.

If an emergency happens, it would affect a known geographical area and Facebook would determine whether you or your friends are in that area. Typically, this would be brought on by emergency services and the media advising Facebook of these situations. The determination would be based on the City data in your Profile or a rough gauge of where you are interacting with Facebook from. It would also use the Last Location details as another metric if you have opted in to the “Nearby Friends” feature.
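Facebook hasn’t published the exact logic, but the core location test can be thought of as a geofence check: treat the affected area as a circle around the emergency’s centre and ask whether the user’s rough location falls inside it. This sketch (the function names and the circular-area assumption are mine, not Facebook’s) shows the idea:

```python
# Illustrative geofence check: is a user's rough location inside a
# circular disaster-affected area? Not Facebook's actual implementation.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def in_affected_area(user_lat, user_lon, centre_lat, centre_lon, radius_km):
    """True if the user is within radius_km of the emergency's centre."""
    return haversine_km(user_lat, user_lon, centre_lat, centre_lon) <= radius_km

# A user in central Melbourne, with an emergency centred nearby (50 km radius):
print(in_affected_area(-37.81, 144.96, -37.85, 144.98, 50))   # True
# The same user against an emergency centred on Perth, ~2700 km away:
print(in_affected_area(-37.81, 144.96, -31.95, 115.86, 50))   # False
```

Because profile-city data and IP-based location are coarse, any real system built this way would need the “Out Of Area” escape hatch that Safety Check provides.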

Facebook Safety Check dashboard screenshots (regular computer and mobile) courtesy Facebook

Facebook Safety Check dashboard- regular and mobile (handheld) views

Here, a notification will pop up if you are in the affected area, and you mark this as “I’m Safe” if you are OK and safe, or as “Out Of Area” if Facebook miscalculates your location and determines that you are in the affected area when you are not. The latter situation can happen for people in a large metropolitan area or conurbation where the disaster or crisis situation only affects a small part of that area.

This status will show up to your Friends as a Notification and in their News Feed to reassure them. This is augmented by a special “dashboard” page created for the emergency that shows a filtered list of your friends who are in the affected area, so you can know who has “called in” to say they are OK.

This same setup also benefits those of us who are outside the affected areas and want to simply be sure that none of our friends have been affected by that crisis. Here, we receive the News Feeds and Notifications about our Friends who have “checked in” as being safe or out of the affected area and can also see this on that same “dashboard” page.

As for the privacy issue, these updates are only visible to those people who are currently your Facebook Friends when it comes to “coarse” coverage and to those of us who have reciprocally enabled the “Nearby Friends” functionality on Facebook for each other.

Although Facebook is the dominant consumer-facing social network and is able to achieve this goal, various other messaging and social-network services could learn from this setup to allow “at-a-glance” notification of our loved ones’ welfare during natural disasters and other crises.


Facebook Events–a new vector for distributing spam

Facebook event spam notification in Notifications list - comes from a Friend

Article

Spammers Using Facebook Events to Trick Users | ReadWrite

My Comments

Ever since its early days, scammers have used Facebook as a place to spam users with their shady schemes. Previously this was done by posting a message on users’ Walls containing a tantalising link surrounded by enticing text, with the link passing through to some unscrupulous site.

This approach has stopped working now that Facebook has achieved critical mass, with users subscribing to many Groups, Pages and personal Profiles that represent their interests. This situation leads to the News Feed, the user’s default view in Facebook, being full of pieces of information from many different sources.

But, over the years, Facebook introduced a notifications mechanism for events beyond potential Friend requests or comments left on a Status Update and users are more likely to check on what has been added to the Notifications list. Here, it also introduced the Event which a Facebook user can invite their Friends or Followers to depending on its settings and this allows the user to register whether they are attending or not.

Event page for spammy Facebook event

This has become a new path for distributing link-bait spam because these Events don’t come often in a user’s interaction with Facebook. Similarly, the default setup has Facebook treating Events as something to generate a Notification about: the red “Notifications” flag effectively lights up in the Web view, while native clients show a distinct alert message and audio prompt when these come in. For example, the mobile clients for iOS and Android list the event in the mobile operating system’s Notifications tray while causing the phone to sound a distinct ringtone, and the Facebook Windows clients “pop up” a message on the Desktop with your computer sounding an audible chime.

As well, if you “accept” these Events, they will appear as a Status Update on your Wall (Timeline). Of course, the user has to click through to the Event page, which will show a URL to follow for more details, most likely alongside some tantalising pictures. These URLs are where the trouble occurs, because they could lead to malware being installed on your computer or other questionable practices taking place, and some of these URLs are in fact obfuscated using URL-shortening services like bit.ly.
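As a practical aside, shortened links can be screened programmatically before anyone clicks them. This sketch flags URLs hosted on well-known shortener domains and includes a helper that resolves where a short link leads using HEAD requests only, so no page body is ever downloaded. The shortener list is illustrative rather than exhaustive, and the expansion helper needs network access, so it is defined but not called here:

```python
# Screening suspicious links: detect known URL shorteners and, if needed,
# resolve where a short link points without downloading the final page.
from urllib.parse import urlparse
from urllib.request import Request, urlopen

# Illustrative, non-exhaustive list of shortener domains.
KNOWN_SHORTENERS = {"bit.ly", "tinyurl.com", "goo.gl", "ow.ly", "t.co"}

def is_shortened(url: str) -> bool:
    """True if the URL's host is a known URL-shortening service."""
    return urlparse(url).netloc.lower() in KNOWN_SHORTENERS

def expand_short_url(url: str) -> str:
    """Follow the redirect chain using HEAD requests (no page bodies are
    fetched) and return the final URL. Requires network access."""
    with urlopen(Request(url, method="HEAD")) as resp:
        return resp.url

print(is_shortened("https://bit.ly/3abcdef"))    # True
print(is_shortened("https://example.com/page"))  # False
```

Even with a tool like this, the safest course for ordinary users remains the advice below: don’t click, and warn the friend whose account was hijacked.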

If these “event spam” notifications come from one of your Facebook Friends, don’t click on anything to do with the Event page. Rather, let your friend know that they are the victim of a spammer and suggest they change the password on their Facebook account and run a malware scan on their computer.
