Category: Social Web

Alternative “free-speech” social networks are coming to the fore

Article

Parler login page screenshot

‘Free speech’ social networks claim post-election surge | Engadget

My Comments

A new breed of social networks is becoming popular with user groups who see Facebook, Twitter, YouTube and co as being equivalent to mainstream media, especially the popular TV channels.

These networks are based in North America, yet outside Silicon Valley. This means that they don’t subscribe to the perceived groupthink associated with Silicon Valley / Northern Californian culture.

As well, they came to the fore after Facebook, Twitter and Google responded to the issues of fake news and disinformation by implementing fact-checking mechanisms and flagging questionable material. Let’s not forget that there is social and business pressure on the established social media companies to clamp down on racial and similar hatred. That meant they had to implement robust user and content management policies to control what appeared on their sites, something that is very difficult to do at the scale of these networks.

Previously, the only way to offer content that isn’t controlled by the social media establishment was to set up and run a blog or forum. This required a fair bit of technical know-how, along with a domain name and a business relationship with a Web hosting service. You would then have to install and maintain something like WordPress, phpBB or vBulletin before you could concentrate on actually running the blog or forum.

It also meant exposing your content to your desired audience, which could require you to maintain a presence on the established social networks purely to draw traffic to your site, perhaps using codified dog-whistle language in those posts. To “make it pay”, you would have to set up a shopfront on the Website to sell merchandise, offer advertising space typically to small businesses, or even run the site on a freemium model with a subscription-driven membership system.

The main networks are Parler and MeWe, which also have iOS and Android native clients available through Apple’s and Google’s mobile app stores. Gab also exists, but Apple and Google won’t admit native clients for this service to their app stores. Rumble offers a video-focused service that works similarly to YouTube, and it is cross-promoted on Parler, MeWe and Gab.

These alternative social networks implement business models that are less dependent on advertising, such as subscription-driven “freemium” setups. Along with that, they adopt “light-touch” policies regarding the management of users and the content they share, billing themselves as “free-speech” alternatives.

There has been strong interest in these networks over the past year, highlighted by the number of accounts created and the number of native mobile clients downloaded from the mobile-platform app stores. This is due to the USA’s knife-edge Presidential election and the COVID-19 coronavirus plague, with some people wanting to seek out information that isn’t “fit for television” i.e. accepted by traditional media and the main Silicon Valley social networks.

Unlike previous alternative-media setups like community broadcasting, small-scale newspapers, computer bulletin boards and the early days of the Internet, these networks are gaining a strong following amongst the hard right, including conspiracy theorists and Trump loyalists. There is even interest within the USA’s Republican Party in shifting towards these services as a way to move away from what they see as the “left-leaning media establishment”, something that is symptomatic of how hyper-partisan the US has become.

A question that will be raised is how large these networks’ user bases will be in a few years’ time after the dust settles on Donald Trump and the COVID-19 pandemic. But I see them and newer alternative social networks maintaining their position, especially for those who want to relate to others who hold opinions or follow topics that are “against the grain”.


WhatsApp to allow users to search the Web regarding content in their messages

WhatsApp Search The Web infographic courtesy of WhatsApp

WhatsApp to allow you to search the Web for text related to viral messages posted on that instant messaging app

Article

WhatsApp Pilots ‘Search the Web’ Tool for Fact-Checking Forwarded Messages | Gizmodo Australia

From the horse’s mouth

WhatsApp

Search The Web (blog post)

My Comments

WhatsApp is taking action to highlight the fact that fake news and disinformation don’t just get passed through the Social Web. Here, they are highlighting the use of instant messaging and, to some extent, email as a vector for this kind of traffic, a vector that is as old as the World Wide Web itself.

Their previous effort regarding this kind of traffic was to mark messages that have been forwarded five or more times with a “double-arrow” icon on the left of the message.

But now they are trialling an option that allows users to Google the contents of a forwarded message to check its veracity. One of the ways to check a news item’s veracity is to see whether one or more news publishers or broadcasters that you trust are covering the story and what kind of light they are shining on it.

Here, the function manifests as a magnifying-glass icon that conditionally appears near forwarded messages. If you click or tap on this icon, you start a browser session that shows the results of a pre-constructed Google-search Weblink created by WhatsApp. It avoids the need to copy then paste the contents of a forwarded message from WhatsApp into your favourite browser or search engine, or into the Google app’s search box, something that can be very fiddly on mobile devices.

But does this function break the end-to-end encryption that WhatsApp implements for the conversations? No, because it works on the cleartext that you see on your screen and simply creates a specially-crafted Google-search Weblink that is passed to whatever software handles Weblinks by default.
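To illustrate the mechanics, here is a minimal sketch in Python of how such a link could be constructed. It is my own assumption of the approach rather than WhatsApp’s actual code: the visible message text is simply URL-encoded into a Google search query and handed to whatever software handles Weblinks by default.

```python
from urllib.parse import quote_plus
import webbrowser

def search_the_web(message_text: str) -> str:
    """Build a Google-search Weblink from the visible (cleartext) message
    and hand it to whatever handles Weblinks by default."""
    url = "https://www.google.com/search?q=" + quote_plus(message_text)
    webbrowser.open(url)  # opens the user's default browser
    return url

# Example: checking a forwarded claim without copying and pasting it yourself
search_the_web("Drinking hot water cures the coronavirus")
```

Because the query is built from the cleartext already on your screen, nothing about this approach requires weakening the end-to-end encryption of the conversation itself.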

An initial pilot run is being made available in Italy, Brazil, Ireland (Eire), UK, Mexico, Spain and the USA. It will be part of the iOS and Android native clients and the messaging service’s Web client.

WhatsApp could evolve this function further by allowing the user to use different search engines like Bing or DuckDuckGo. But they would have to know of any platform-specific query syntax for each of these search engines, and it may be a feature that would have to be rolled out in a piecemeal fashion.

They could offer the “search the Web” function as something that can be done for any message, rather than only for forwarded messages. I see it as being relevant for people who use the group-chatting functionality that WhatsApp offers, because people can use a group chat as a place to post a rant that links to a questionable Web resource. Or you may have a relative or friend who simply posts questionable information as part of their conversation with you.

At least WhatsApp are adding features to their chat platform’s client software to make it easier to put the brakes on disinformation spreading through it. This could be something that could be investigated by other instant-messaging platforms, including SMS/MMS text clients.


A digital watermark to identify the authenticity of news photos

Articles

ABC News 24 coronavirus coverage

News services that appear on the “screen of respect” that is the main TV screen, like the ABC, are often seen as being “of respect”, and all the on-screen text is part of their identity

TNI steps up fight against disinformation | Advanced Television

News outlets will digitally watermark content to limit misinformation | Engadget

News Organizations Will Start Using Digital Watermarks To Combat Fake News | Ubergizmo

My Comments

The Trusted News Initiative is a recently-formed group of global news and tech organisations, mostly household names in these fields, who are working together to stop the spread of disinformation where it poses a risk of real-world harm. It also includes flagging misinformation that undermines trust in the TNI’s partner news providers like the BBC. Here, the online platforms can review the content that comes in, perhaps red-flagging questionable content, and newsrooms avoid blindly republishing it.

ABC News website

… as well as their online presence – they will benefit from having their imagery authenticated by a TNI watermark

One of their efforts is to agree on and establish an early-warning system to combat the spread of fake news and disinformation. It is being established in the months leading up to polling day for the 2020 US Presidential Election and will flag disinformation where there is an immediate threat to life or to election integrity.

It is based on efforts to tackle disinformation associated with the 2019 UK general election, the Taiwan 2020 general election, and the COVID-19 coronavirus plague.

Another tactic is Project Origin, which this article is primarily about.

An issue often associated with fake news and disinformation is the use of imagery and graphics to make the news look credible and from a trusted source.

Typically this involves altered or synthesised images and vision that is overlaid with the logos and other trade dress associated with BBC, CNN or another newsroom of respect. This conveys to people who view this online or on TV that the news is for real and is from a respected source.

Project Origin is about creating a watermark for imagery and vision that comes from a particular authentic content creator, with the watermark degrading whenever the content is manipulated. It will be based around open standards overseen by the TNI that relate to authenticating visual content, thus avoiding the need to reinvent the wheel when it comes to developing any software for this to work.

One question I would have is whether it is only readable by computer equipment or whether there is a human-visible element like the so-called logo “bug” that appears in the corner of video content you see on TV. If it is machine-readable only, will there be the ability for a news publisher or broadcaster to overlay a graphic or message that states the authenticity at the point of publication? Similarly, would a Web browser or native client for an online service have extra logic to indicate the authenticity of an image or video footage?

I would also like to see the ability to indicate the date of the actual image or footage being part of the watermark. This is because some fake news tends to be corroborated with older lookalike imagery like crowd footage from a similar but prior event to convince the viewer. Some of us may also look at the idea of embedding the actual or approximate location of the image or footage in the watermark.
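To make the idea concrete, here is a minimal sketch in Python of the kind of provenance record such a watermark could carry. It is purely an illustration under assumed details, not the actual Project Origin or TNI specification: a content hash is bound to the capture date and approximate location, so any manipulation of the pixels or the metadata causes verification to fail. A real scheme would use public-key signatures rather than the shared secret shown here.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"newsroom-demo-key"  # placeholder; a real system would use public-key signing

def make_provenance_record(media_bytes: bytes, capture_time: datetime,
                           approx_location: str) -> dict:
    """Bind a content hash to capture metadata, then sign the record."""
    record = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "captured_at": capture_time.isoformat(),
        "approx_location": approx_location,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(media_bytes: bytes, record: dict) -> bool:
    """Recompute the hash and signature; any edit to the imagery or metadata fails."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if unsigned.get("content_sha256") != hashlib.sha256(media_bytes).hexdigest():
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("signature", ""))

# Example with a hypothetical piece of footage
footage = b"...raw video bytes..."
record = make_provenance_record(footage, datetime.now(timezone.utc), "Melbourne CBD (approx.)")
print(verify(footage, record))                # True
print(verify(footage + b"tampered", record))  # False
```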

There is also the issue of newsrooms importing images and footage from other sources whose equipment they don’t have control over. For example, an increasing amount of amateur and videosurveillance imagery is used in the news usually because the amateur photographer or the videosurveillance setup has the “first images” of the news event. Then there is reliance on stock-image libraries and image archives for extra or historical footage; along with newsrooms and news / PR agencies sharing imagery with each other. Let’s not forget media companies who engage “stringers” (freelance photographers and videographers) who supply images and vision taken with their own equipment.

The question with all this, especially with amateur / videosurveillance / stringer footage taken with equipment that media organisations don’t have control over is how such imagery can be authenticated by a newsroom. This is more so where the image just came off a source like someone’s smartphone or the DVR equipment within a premises’ security room. There is also the factor that one source could tender the same imagery to multiple media outlets, whether through a media-relations team or simply offering it around.

At least Project Origin will be useful as a method to allow the audience to know the authenticity and provenance of imagery that is purported to corroborate a newsworthy event.


What can be done about taming political rhetoric on online services?

Article

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Online services may have to observe similar rules to traditional media and postal services when it comes to handling election and referendum campaigns

There’s a simple way to reduce extreme political rhetoric on Facebook and Twitter | FastCompany

My Comments

In this day and age, a key issue that is being raised regarding the management of elections and referenda is the existence of extreme political rhetoric on social media and other online services.

But the main cause of this problem is the algorithmic nature associated with most online services. This can affect what appears in a user’s default news feed when they start a Facebook, Twitter or Instagram session; whether a bulk-distributed email ends up in the user’s email inbox or spam folder; whether the advertising associated with a campaign appears in search-driven or display online advertising; or if the link appears on the first page of a search-engine user experience.

This is compared to what happens with traditional media or postal services while there is an election or referendum. In most of the democracies around the world, there are regulations overseen by the electoral-oversight, broadcasting and postal authorities regarding equal access to airtime, media space and the postal system by candidates or political parties in an election or organisations defending each option available in a referendum. If the medium or platform isn’t regulated by the government such as what happens with out-of-home advertising or print media, the peak bodies associated with that space establish equal lowest-cost access to these platforms through various policies.

Examples of this include an equal number of TV or radio commercial spots made available at the cheapest advertising rate for candidates or political parties contesting a poll, including the same level of access to prime-time advertising spaces; scheduled broadcast debates or policy statements on free-to-air TV with equal access for candidates; or the postal service guaranteeing priority throughput of election matter for each contestant at the same low cost.

These regulations or policies make it hard for a candidate, political party or similar organisation to “game” the system while allowing voters to make an informed choice about whom or what they vote for. But the algorithmic approach associated with the online services doesn’t guarantee candidates equal access to the voters’ eyeballs, which encourages the creation of incendiary content that can go viral and be shared amongst many people.

What needs to happen is that online services have to establish a set of policies regarding advertising and editorial content tendered by candidates, political parties and allied organisations in order to guarantee equal delivery of that content. This means marking such content so it gains equal rotation in an online-advertising platform; using “override markers” that provide guaranteed recorded delivery of election matter to one’s email inbox; or masking interaction details associated with election matter posted on a Facebook news feed.
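As a rough sketch of what “equal rotation” could look like in practice, the following Python fragment serves campaigns flagged as election matter in a strict round-robin order instead of ranking them by engagement. The campaign names and the scheduler are my own illustration, not any platform’s actual ad-delivery logic.

```python
from itertools import cycle

class EqualRotationScheduler:
    """Serve flagged election campaigns in a fixed, engagement-independent order."""

    def __init__(self, campaigns):
        # A stable, sorted order so no campaign gains an advantage from virality
        self._rotation = cycle(sorted(campaigns))

    def next_ad(self):
        return next(self._rotation)

scheduler = EqualRotationScheduler(["Candidate A", "Candidate B", "Candidate C"])
impressions = [scheduler.next_ad() for _ in range(9)]
print(impressions)  # each candidate receives exactly 3 of the 9 impressions
```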

But the most important requirement is that the online platforms cannot censor or interfere with the editorial content of the message being delivered to the voters. This is seen as important especially in a hyper-partisan USA, where conservative thinkers perceive that Silicon Valley is imposing Northern-Californian / Bay-Area values upon people who use or publish through their online services.

A question that can easily crop up is the delivery of election matter beyond the jurisdiction that is affected by the poll. Internet-based platforms can make this very feasible, and it may be considered important for, say, a country’s expats who want to cast their vote in their homeland’s elections. But people who don’t live within or have ties to the affected jurisdiction may see such material as being of little value if there is a requirement to deliver electoral material beyond a jurisdiction’s borders. This could be answered through social-media and email users, or online publishers, having configurable options to receive and show material from multiple jurisdictions rather than just the end-user’s current jurisdiction.

What is being realised here is that online services will need to take a leaf out of the traditional regulated media and communications playbook to guarantee election candidates fair and equal access to the voters through these platforms.


Facebook now offers a way to turn off political ads on its main platforms

Article

Facebook login page

Don’t want political ads in your Facebook or Instagram feed? You’ll be able to turn that off | CNet

From the horse’s mouth

Facebook

Launching The Largest Voting Information Effort in US History (Press Release)

Videos

Control Political Ad Content on Facebook (Click or tap to play)

Control Political Ad Content on Instagram (Click or tap to play)

My Comments

Facebook is introducing a feature that allows its users to effectively “mute” political advertising including issues-driven advertising on their main social-Web platform as well as Instagram.

This feature will be available to USA-based accounts as part of Facebook’s voter-information features for the 2020 Presidential Elections. That includes information on how and where to register along with where and when to vote, including early-voting (pre-poll voting) and postal-voting information. It underscores Facebook’s role as part of Silicon Valley’s effort to “get out the vote” in the USA.

Personally, I am not sure whether this setup will provide information relevant to American expats who have moved to other countries, like how their local US embassy or consulate is facilitating their vote. This is because in most cases these expats will still have voting rights of some sort for US elections.

The option will be available in the “Ad Preferences” section of your user-account settings on both Facebook and Instagram. Alternatively, both platforms will have a contextual option, highlighted under a stylised “i”, available on political ads that allows you to see fewer ads of this type. This can be set up using your Web-based user experience or the official native mobile-platform apps that you use to work these platforms with.

Of course, there won’t be the ability to regulate editorial content from media organisations that is posted or shared through Facebook or Instagram. This will be an issue when you deal with media outlets that have a highly-partisan editorial policy. Nor will there be the ability to control posts, shares and comments from Pages and Profiles that aren’t shared as a paid advertisement.

There may also be questions about whether your favourite politician’s, political party’s or civic-society organisation’s Facebook or Instagram traffic will appear in your platform’s main view, especially if they pay to increase the viewership of these posts. This can be of concern for those of us who have a strong role in political and civic society and see that Facebook traffic as a “news-ticker” for the political entities we engage with.

Facebook intends to roll this feature out to other countries where they have established systems for managing and monitoring political advertising on their platforms. At least they are the first online ad platform that allows users to have control over the political and issue advertising that they see while they use that platform.


Keeping the same character within your online community

Article

Facebook login page

Online communities do represent a lot of hard work and continuous effort including having many moderators

General Election 2019: Has your local Facebook group been hijacked by politics? | BBC News

My Comments

The past UK General Election highlighted an issue with the management of online communities, especially those that are targeted at neighbourhoods.

In the BBC News article, a local Facebook group that was used by a neighbourhood specifically for sharing advice, recommending businesses, advertising local events, “lost-and-found” and similar purposes was steered from this purpose to a political discussion board.

You may or may not think that politics should have something to do with your neighbourhood, but ordinarily it stays well clear. That is unless you are dealing with a locally-focused issue like the availability of publicly-funded services such as healthcare, education or transport infrastructure in your neighbourhood. Or it could be about a property development that is before the local council that could affect your neighbourhood.

How that came about was that the group was managed by a single older person, who passed away. Due to the loss of its only administrator, the group effectively became a headless “zombie” group where there was no oversight over what was being posted.

That happened as the UK general election was around the corner, with the politics “heating up” especially as the affected neighbourhood was in a marginal electorate. Here, the neighbourhood group “lost it” when it came to political content, with the acrimony heating up further after the close of polls. The late administrator’s widow even stated that the online group was being hijacked by others pushing their own agendas.

Subsequently, several members of that neighbourhood online forum stepped in to effectively wrest control and restore sanity to it. This included laying down rules against online bullying and hate speech along with encouraging proper decent courtesy on the bulletin board. But it became hard to steer the forum back to that sense of normalcy due to pushback by some members of the group and the activity that had become established during the power vacuum.

This kind of behaviour, like all other misbehaviour facilitated through the Social Web and other Internet platforms, exploits the perceived distance that the Internet offers. It is something you wouldn’t do to someone face-to-face.

What was being identified was a loss of effective management power for that online group due to the absence of a leader who maintained the group’s character, with no-one effectively stepping up to fill the void. This can easily happen with any form of online forum or bulletin board, including an impromptu “group chat” set up on a platform like WhatsApp, Facebook Messenger or Viber.

It is like a real-life situation with an organisation like a family business where people have put in the hard yards to maintain a particular character. Then they lose the effective control of that organisation and no-one steps up to the plate to maintain that same character. This kind of situation can occur if there isn’t continual thought about succession planning in that organisation’s management especially if there aren’t any young people in the organisation who are loyal to its character and vision.

An online forum should have the ability and be encouraged to have multiple moderators with the same vision so others can “take over” if one isn’t able to adequately continue the job anymore. Here, you can discover and encourage potential moderators through their active participation online and in any offline events. But you would need to have some people who have some sort of computer and Internet literacy as moderators so they know their way around the system or require very minimal training.

The multiplicity of moderators can cater for unforeseen situations like death or a sudden resignation. It also assures that one of the moderators can travel without needing to keep their “finger on the pulse” of that online community. In the same vein, if they or one of their loved ones falls ill or there is a personal calamity, they can concentrate on their own or their loved one’s recovery and rehabilitation or on managing their situation.

There is also the reality that a person who moves out of a neighbourhood on good terms will often maintain regular contact with their former neighbours, trying to keep their “finger on the pulse” of the neighbourhood’s character. This fact can be exploited when managing a neighbourhood-focused online community by keeping them on as a “standby moderator” who can be “roped in” to moderate the online community if there are too few moderators.

To keep the same kind of “vibe” within that online community that you manage will require many hands at the pump. It is not just a one-person affair.


WhatsApp now highlights messaging services as a fake-news vector

Articles

WhatsApp debuts fact-checking service to counter fake news in India | Engadget

India: WhatsApp launches fact-check service to fight fake news | Al Jazeera

From the horse’s mouth

WhatsApp

Tips to help prevent the spread of rumors and fake news (User Advice)

Video – Click or tap to play

My Comments

For as long as the World Wide Web has existed, email has been used as a way to share online news amongst people in your social circle.

Typically this has shown up in the form of jokes, articles and the like appearing in your email inbox from friends, colleagues or relatives, sometimes with these articles forwarded on from someone else. It also has been simplified through the ability to add multiple contacts from your contact list to the “To”, “Cc” or “Bcc” fields in the email form or create contact lists or “virtual contacts” from multiple contacts.

The various instant-messaging platforms have also become a vector for sharing links to articles hosted somewhere on the Internet in the same manner as email, as have the carrier-based SMS and MMS texting platforms when used with a smartphone.

But the concern raised about the distribution of misinformation and fake news has been focused on the popular social media and image / video sharing platforms. This is while fake news and misinformation creep into your inbox or instant-messaging client thanks to one or more of your friends who like passing on this kind of information.

WhatsApp, a secure instant-messaging platform owned by Facebook, is starting to tackle this issue head-on with its Indian userbase as that country enters the election cycle for its main general elections. They are picking up on the issue of fake news and misinformation after the Facebook group of companies was brought into the public limelight over this issue. As well, Facebook have recently been clamping down on inauthentic behaviour targeting India and Pakistan.

WhatsApp is now highlighting the fake-news problem in India, especially as this platform is a popular instant messenger within that country. They are working with a local fact-checking startup called Proto to create the Checkpoint Tipline, which allows users to have links that are sent to them verified. It is driven by a “virtual contact” to which WhatsApp users forward questionable links or imagery.

Due to the nature of its end-to-end encryption and the fact that the service is purely a messaging service, there isn’t the ability for WhatsApp itself to verify or highlight questionable content within the conversation. But they have also placed limits on the number of users one can broadcast a message to in order to tame the spread of rumours.

It is also being used as a tool to identify the level of fake news and misinformation taking place on the messenger platform and to see how much of a vector these platforms are.

Personally, I would like to see the various fact-checking agencies offer an email mailbox that you can forward emails with questionable links and imagery to, so they can verify the rumour mail doing the rounds. It could operate in a similar vein to how the banks, tax offices and the like have set up mailboxes for people to forward phishing email to, so these organisations can be aware of the phishing problem they are facing.

The only problem with this kind of service is that people who are astute and savvy are more likely to use it. This may not affect those of us who just end up passing on whatever comes our way.


Are we in an era where the smartphone is the new “idiot box”?

The TV era

TV, VHS videocassette recorder and rented video movies

From the late 1960s through to the 2000s, the television was seen by some people as a time-waster. This was aggravated through increasingly-affordable sets, the existence of 24-hour programming, a gradually-increasing number of TV channels competing for viewership, remote controls and private broadcasters including many-channel pay-TV services.

It led to increasing concern about various idle and unhealthy TV-viewing practices. Situations that were often called out included people dwelling on poor-quality content offered on commercial free-to-air or pay-TV channels such as daytime TV; people loafing on the couch with the remote control in their hand as they idly change channels for something to watch, known as “flicking” or channel-surfing; along with parents using the TV as an “electronic babysitter” for their children.

Even technologies like videocassette recorders or video games consoles didn’t improve things as far as the critics were concerned. One talking point raised during the early 1990s was the ubiquity and accessibility of violent video content through local video stores with this leading to imitative behaviour.

We even ended up with the TV set being referred to as an “idiot box”, “boob tube” or similar names; or people who spend a lot of time idly watching TV being described as having “square eyes” or being “couch potatoes”. Some people even stood for “TV-free” spaces and times to encourage meaningful activity, such as not having a set installed at a weekender home.

There were even some wellness campaigns that tackled unhealthy TV viewing. One of these was the “Life Be In It” campaign run by Australian governments during the late 1970s. This campaign was centred around a series of animated TV “public-service-announcement” commercials (YouTube – example about walking) featuring a character called “Norm”, which showed different activities one could be engaging in rather than loafing in the armchair watching TV non-stop.

The rise of the personal computer, Internet and smartphones

The 1980s saw the rise of increasingly-affordable personal-computing power on the home or business desktop with these computers gaining increasing abilities over the years. With this was the rise of games written for these computers including some “time-waster” or “guilty-pleasure” games like Solitaire or the Leisure Suit Larry games.

During the late 1990s and the 2000s, the Internet came on board and gradually offered resources to the personal computer that can compete with the TV. This was brought about with many interesting Websites coming online with some of these sites running participant forums of some form. It also had us own our own email address as a private electronic communications channel.

Also, by the mid 1990s, most Western countries had implemented deregulated competitive telecommunications markets, and one of the benefits was mobile telephony service that was affordable for most people. It led to us each being able to maintain our own mobile telephone service and number, which effectively gave each one of us our own private connection. This is rather than sharing a common connection like a landline telephone number ringing a telephone installed in a common area like a kitchen or living room.

The smartphone and tablet era

USB-C connector on Samsung Galaxy S8 Plus smartphone

The smartphone is now being seen as the “new TV”

But since the late 2000s, the Internet has started to take the place of TV as a centre of idle activity. This was driven by the existence of YouTube, instant messaging and social media, along with increasingly-portable computing devices, especially highly-pocketable smartphones and tablets or small laptops able to be stuffed into most right-sized hand luggage, alongside high-speed Internet service available through highly-affordable mobile-broadband services or ubiquitous Wi-Fi networks.

Issues that were underscored included people looking at their phones all day and all night to check their Facebook activity, watch YouTube clips or play games rather than talking with each other; smartphone anxiety where you have to have your phone with you at all times including bringing it to the dinner table; and the vanity associated with the social-media selfie culture. Sometimes browsing the Social Web, including YouTube, ended up being seen as today’s equivalent of watching the low-grade TV offerings from a private TV broadcaster. Let’s not forget how many of us have played “Candy Crush Saga” or “Angry Birds” on our smartphones as a guilty pleasure.

Apple iPad Pro 9.7 inch press picture courtesy of Apple

Or the iPad being used to browse around the Social Web and watch YouTube

This issue has come to the fore over the last few years with concepts like “digital detoxification”, an interest in Internet-free mobile-phone devices including “one-more-time” takes on late-90s / early-2000s mobile-phone designs, and mobile operating systems gaining functionality that identifies what you are spending most of your time on, amongst other things.

Educators are even regarding the time spent using a computing device for entertainment as the equivalent of idly watching TV entertainment and make a reference to this time as “screen time”. This is more so in the context of how our children use computing devices like tablets or smartphones.

Recently, France and the Australian State of Victoria have passed regulations to prohibit the use of smartphones by schoolchildren in government-run schools with the former proscribing it in primary and early-secondary (middle or junior high) levels; and the latter for primary and all secondary levels.

Even smartphone manufacturers have found that the technology has hit a peak, with people not being as interested in the latest smartphones now that the devices are associated with today’s equivalent of idle TV watching. This may lead to them taking a more evolutionary approach towards smartphone design rather than heavily investing in newer products.

What it has come down to

How I see all of this is the existence of an evolutionary cycle affecting particular forms of mass media and entertainment. It is especially where the media form allows for inanity thanks to the lack of friction involved in providing or consuming this kind of entertainment. As well, the ability for the producer, distributor or user to easily “shape” the content to make a “fairy-tale” existence where the “grass is always greener” or to pander to our base instincts can expose a media platform to question and criticism.

In some cases, there is an ethereal goal in some quarters to see the primary use of media and communications for productive or educational purposes especially of a challenging nature rather than for entertainment. It also includes reworking the time we spend on entertainment or casual communications towards something more meaningful. But we still see the lightweight entertainment and conversation more as a way to break boredom.

UPDATE: I have inserted details about France and Australia banning smartphones in schools especially in relationship to the smartphone ban announced by the Victorian State Government on 26 June 2019.


Being cautious about fake news and misinformation in Australia

Previous Coverage

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Are you sure you are casting your vote or able to cast your vote without undue influence?

Being aware of fake news in the UK

Fact-checking now part of the online media-aggregation function

Useful Australian-based resources

ABC Fact Check – run in conjunction with RMIT University

Political Parties

Australian Labor Party (VIC, NSW)

Liberal Party – work as a coalition with National Party (VIC, NSW)

National Party – work as a coalition with Liberal Party (VIC, NSW)

Australian Greens – state branches link from main page

One Nation (Pauline Hanson)

Katter’s Australia Party

Derryn Hinch’s Justice Party

Australian Conservatives

Liberal Democratic Party

United Australia Party

My Comments

Over the next six months, Australia will see some very critical general elections come to pass, both on a federal level and in the two most-highly-populated states that host most of that country’s economic and political activity. On 30 October 2018, the election writs were served in the state of Victoria for its general election to take place on 24 November 2018. Then, on 23 March 2019, New South Wales is expected to go to the polls for its general election. Then the whole country is expected to go to the polls for the federal general election by 18 May 2019.

As these election cycles take place over a relatively short space of time and affect most of the country’s population, there is a high risk that Australians could fall victim to misinformation campaigns. This can subsequently lead to state and federal ballots being cast that steer the country against the grain, like what happened in 2016 with the USA voting in Donald Trump as their President and the UK voting to leave the European Union.

Google News - desktop Web view

Look for tags within Google News that describe the context of the story

The issue of fake news and misinformation is being seen as increasingly relevant as we switch away from traditional media towards social media and our smartphones, tablets and computers for our daily news consumption. This is thanks to the use of online search and news-aggregation services like Google News; or social media like Facebook or Twitter which can be seen by most of us as an “at-a-glance” view of the news.

As well, a significant number of well-known newsrooms are becoming smaller due to the reduced circulation and ratings for their newspaper or radio / TV broadcast thanks to the use of online resources for our news. It can subsequently lead to poor-quality news reporting and presentation with a calibre equivalent to the hourly news bulletin offered by a music-focused radio station. It also leads to various mastheads plagiarising content from other newsrooms that place more value on their reporting.

The availability of low-cost or free no-questions-asked Web and video hosting along with easy-to-use Web-authoring, desktop-publishing and desktop-video platforms make it feasible for most people to create a Web site or online video channel. It has led to an increased number of Websites and video channels that yield propaganda and information that is dressed up as news but with questionable accuracy.

Another factor that has recently been raised in the context of fake news, misinformation and propaganda is the creation and use of “deepfake” image and audio-visual content. This is where still images, audio or video clips in the digital domain are altered to show a falsehood using artificial-intelligence technology, in order to convince viewers that they are dealing with an original audio-visual resource. The audio content can be made to mimic an actual speaker’s voice and intonation as part of creating a deepfake soundbite or video clip.

It then becomes easy to place fake news, propaganda and misinformation onto easily-accessible Web hosts including YouTube in the case of videos. Then this content would be propagated around the Internet through the likes of Twitter, Facebook or online bulletin boards. It is more so if this content supports our beliefs and enhances the so-called “filter bubble” associated with our beliefs and media use.

There is also the fact that newsrooms without the resources to rigorously scrutinise incoming news could pick this kind of content up and publish or broadcast this content. This can also be magnified with media that engages in tabloid journalism that depends on sensationalism to get the readership or keep listeners and viewers from switching away.

The borderless nature of the Internet makes it easy to set up presence in one jurisdiction to target the citizens of another jurisdiction in a manner to avoid being caught by that jurisdiction’s election-oversight, broadcast-standards or advertising-standards authority. Along with that, a significant number of jurisdictions focus their political-advertising regulation towards the traditional media platforms even though we are making more use of online platforms.

Recently, the Australian Electoral Commission, along with the Department of Home Affairs, the Australian Federal Police and ASIO, established an Electoral Integrity Assurance Task Force. It was set up in advance of recent federal byelections such as the Super Saturday byelections, where there was a risk of clandestine foreign interference that could affect the integrity of those polls.

But the issue I am drawing attention to here is the use of social media or other online resources to run fake-news campaigns to sway the populace’s opinion for or against certain politicians. This is exacerbated by under-resourced newsrooms that could pick such material up and give it credibility in the public’s eyes.

But most of Silicon Valley’s online platforms are taking various steps to counter fake news, propaganda and disinformation, using the following approaches.

Firstly, they are turning off the money-supply tap by keeping their online advertising networks away from sites or apps that spread misinformation.

They also are engaging with various fact-check organisations to identify fake news that is doing the rounds and tuning their search and trending-articles algorithms to bury this kind of content.

Autocomplete list in Google Search Web user interface

Google users can report Autocomplete suggestions that they come across in their search-engine experience.

They are also maintaining a feedback loop with their end-users by allowing them to report fake-news entries in their home page or default view. This includes search results or autocomplete entries in Google’s search-engine user interface. This is facilitated through a “report this” option that is part of the service’s user interface or help pages.

Most of the social networks and online-advertising services are also implementing robust user-account-management and system-security protocols. This includes eliminating or suspending accounts that are used for misinformation. It also includes checking the authenticity of accounts running pages or advertising campaigns that are politically-targeted through methods like street-address verification.

In the case of political content, social networks and online-advertising networks are implementing easily-accessible archives of all political advertising or material that is being published, including where the material is being targeted.

ABC FactCheck – the ABC’s fact-checking resource that is part of their newsroom

Initially these efforts are taking place within the USA but Silicon Valley is rolling them out across the world at varying timeframes and with local adaptations.

Personally, I would still like to see a strong dialogue between the various Social Web, search, online-advertising and other online platforms; and the various government and non-government entities overseeing election and campaign integrity and allied issues. This can be about oversight and standards regarding political communications in the online space along with data security for each stakeholder.

What can you do?

Look for any information that qualifies the kind of story if you are viewing a collection of headlines on a search or news-aggregation site or app. Here, pay attention to tags or other metadata like “satire”, “fact checking” or “news” that describe the context of the story or other attributes.

Most search engines and news-aggregation Websites will show this information in their desktop or mobile user interface and are being engineered to show a richer set of details. You may find that you have to do something extra like click a “more” icon or dwell on the heading to bring up this extra detail on some user interfaces.

Trust your gut reaction to that claim being shared around social media. You may realise that a claim associated with fake news may be out of touch with reality. Sensationalised or lurid headlines are a usual giveaway, along with missing information or copy that whips up immediate emotional responses from the reader.

Check the host Website or use a search engine like Google to see if the news sources you trust cover that story. You may come across one or more tools that identify questionable news easily, typically in the form of a plug-in or extension that works with your browser if its functionality can be expanded with these kinds of add-ons. This is something that is more established with browsers that run on regular Windows, Mac or Linux computers.

It is also a good idea to check for official press releases or similar material offered “from the horse’s mouth” by the candidates, political parties, government departments or similar organisations themselves. In some cases during elections, some of the candidates may run their own Websites or a Website that links from the political party’s Website. Checking the material on the Websites run by these organisations may indicate whether you are dealing with a “beat-up” or an exaggeration of the facts.

As you do your online research into a topic, make sure that you are familiar with how the URLs for the various online resources that you visit are represented in your browser’s address bar. Here, be careful if a resource has more than is expected between the “.com”, “.gov.au” or similar domain-name ending and the first “/” leading to the actual online resource.
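As a simple illustration of that check, the following Python sketch flags addresses where a trusted-looking name is buried inside a longer, unrelated hostname. The list of trusted hosts and the helper itself are hypothetical examples, not a definitive safety test.

```python
from urllib.parse import urlparse

TRUSTED_HOSTS = {"abc.net.au", "aec.gov.au", "smh.com.au"}  # example list only

def looks_suspicious(url: str) -> bool:
    """Flag URLs whose hostname merely contains a trusted name rather than ending with it,
    e.g. 'abc.net.au.breaking-news.example.com'."""
    host = (urlparse(url).hostname or "").lower()
    for trusted in TRUSTED_HOSTS:
        if trusted in host and not (host == trusted or host.endswith("." + trusted)):
            return True  # trusted name appears, but the real domain is something else
    return False

print(looks_suspicious("https://www.abc.net.au/news/"))                       # False
print(looks_suspicious("https://abc.net.au.breaking-news.example.com/vote"))  # True
```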

Kogan Internet table radio

Sometimes the good ol’ radio can be the trusted news source

You may have to rely on getting your news from one or more trusted sources. This would include the online presence offered by these sources. Or it may be about switching on the radio or telly for the news or visiting your local newsagent to get the latest newspaper.

Examples of these are the ABC (Radio National, Local Radio, News Radio, the main TV channel and the News 24 TV channel), SBS TV, or the Fairfax newspapers. Some of the music radio stations that are part of a family run alongside a talk-radio network, like the ABC with their ABC Classic FM or Triple J services, will have an hourly newscast with news from that network. But be careful when dealing with tabloid journalism or commercial talkback radio because you may be exposed to unnecessary exaggeration or distortion of facts.

As well, use the social-network platform’s or search engine’s reporting functionality to draw attention to fake news, propaganda or misinformation that is being shared or highlighted on that online service. In some cases like reporting inappropriate autocomplete predictions to Google, you may have to use the platform’s help options to hunt for the necessary resources.

Here, as we Australians face a run of general-election cycles that can be very tantalising for clandestine foreign interference, we have to be on our guard regarding fake news, propaganda and misinformation that could affect the polls.


Facebook clamps down on voter-suppression misinformation

Article

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Are you sure you are casting your vote or able to cast your vote without undue influence?

Facebook Extends Ban On Election Fakery To Include Lies About Voting Requirements | Gizmodo

From the horse’s mouth

Facebook

Expanding Our Policies on Voter Suppression (Press Release)

My Comments

Over recent years, misinformation and fake news have been used as a tool to attack the electoral process in order to steer the vote towards candidates or political parties preferred by powerful interests. This was demonstrated through the UK Brexit referendum and the USA Presidential Election in 2016, with out-of-character results emanating from those polls. It has therefore made us more sensitive to the power of misinformation and its use in influencing an election cycle, with most of us looking towards established news outlets for our political news.

Another attack on the electoral process in a democracy is the use of misinformation or intimidation to discourage people from registering on the electoral rolls including updating their electoral-roll details or turning up to vote. This underhand tactic is typically to prevent certain communities from casting votes that would sway the vote away from an area-preferred candidate.

Even Australia, with its compulsory voting and universal suffrage laws, isn’t immune from this kind of activity, as demonstrated in the recent federal byelection for the Batman (now Cooper) electorate. Here, close to election day, there was a robocall campaign targeted at older people in the north of the electorate who were likely to vote for the Australian Labor Party candidate rather than the area-preferred Greens candidate.

But this is a very common trick performed in the USA against minority, student or other voters to prevent them casting votes towards liberal candidates. This manifests in accusations about non-citizens casting votes or the same people casting votes in multiple electorates.

Facebook have taken further action against voter-suppression misinformation by including it in their remit against fake news and misinformation. This action has been taken as part of Silicon Valley’s efforts to work against fake news during the US midterm Congressional elections.

At the moment, this effort applies to information regarding exaggerated identification or procedural requirements concerning enrolment on the electoral rolls or casting your vote. It doesn’t yet apply to reports about conditions at the polling booths like opening hours, overcrowding or violence. Nor does this effort approach the distribution of other misinformation or propaganda to discourage enrolment and voting.

US-based Facebook end-users can use the reporting workflow to report voter-suppression posts to Facebook. This is through the use of an “Incorrect Voting Info” option that you select when reporting posted content to Facebook. Here, this kind of information will be verified by fact-checkers engaged by Facebook, with false content “buried” in the News Feed along with additional relevant content being supplied with the article when people discover it.

This is alongside a constant Facebook effort to detect and remove fake accounts existing on the Facebook platform along with increased political-content transparency across its advertising platforms.

As I have always said, the issue regarding misleading information that influences the election cycle can’t just be handled by social-media and advertising platforms themselves. These platforms need to work alongside the government-run electoral-oversight authorities and similar organisations that work on an international level to exchange the necessary intelligence to effectively identify and take action against electoral fraud and corruption.
