Tag: misinformation

Australian Electoral Commission takes to Twitter to rebut election disinformation

Articles

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Don’t Spread Disinformation on Twitter or the AEC Will Roast You (gizmodo.com.au)

Federal Election 2022: How the AEC Is Preparing to Combat Misinformation (gizmodo.com.au)

From the horse’s mouth

Australian Electoral Commission

AEC launches disinformation register ahead of 2022 poll (Press Release)

Previous coverage on HomeNetworking01.info

Being cautious about fake news and misinformation in Australia

My Comments

The next 18 months will be very significant as far as general elections in Australia go. A Federal election and state elections in the most populous states take place during that period: this year the Federal election has to take place by May and the Victorian state election by November, then the New South Wales state election takes place by March 2023.

Democracy sausages prepared at election day sausage sizzle


Two chances over the next 18 months to benefit from the democracy sausage as you cast your vote
Kerry Raymond, CC BY 4.0 <https://creativecommons.org/licenses/by/4.0>, via Wikimedia Commons

Oh yeah, more chances to eat those democracy sausages available at that school’s sausage sizzle after you cast that vote. But the campaign machine has started up early this year at the Federal level, with United Australia Party ads appearing on commercial TV since the Winter Olympics, yard signs from various political parties appearing in my local neighbourhood, and an independent candidate for the Kooyong electorate running ads online through Google AdSense, with some of those ads appearing on HomeNetworking01.info. This is even before the Governor-General has issued the necessary writs to dissolve the Federal Parliament and commence the election cycle.

Ged Kearney ALP candidate yard sign

The campaigns are underway even before the election is called

This season will be coloured with the COVID coronavirus plague and the associated vaccination campaigns, lockdowns and other public-health measures used to mitigate this virus. This will exacerbate Trump-style disinformation campaigns affecting the Australian electoral process, especially from anti-vaccination / anti-public-health-measure groups.

COVID will also exacerbate issues regarding access to the vote in a safe manner. This includes dealing with people who are isolated or quarantined due to them or their household members being struck down by the disease or allowing people on the testing and vaccination front lines to cast their vote. Or it may be about running the polling booths in a manner that is COVID-safe and assures the proper secret ballot.

There is also the recent flooding taking place in Queensland and NSW, which brings about questions regarding access to the vote for affected communities and the volunteers helping those communities. All these situations would depend on people knowing where and how to cast “convenience votes” like early or postal votes, or knowing where the nearest polling booth is, especially with the flooding rendering the usual booths in affected areas out of action.

The Australian Electoral Commission, which oversees elections at a federal level, has established a register to record fake-news and disinformation campaigns that appear online to target Australians. It will also appear at least on Twitter to debunk disinformation that is swirling around on that platform and using common hashtags associated with Australian politics and elections.

Add to this a stronger, wider “Stop And Consider” campaign to encourage us to be mindful about what we see, hear or read regarding the election. This is based on the original campaign run during the 2019 Federal election to encourage us to be careful about what we share online. That campaign was driven by the 2019 Federal election being the first of its kind since we became aware of online fake-news and disinformation campaigns and their power to manipulate the vote.

There will be a stronger liaison between the AEC and the online services in relation to sharing intelligence about disinformation campaigns.

But the elephant in the room regarding election safety is IT security and cyber safety for a significant number of IT systems that would see a significant amount of election-related data being created or modified through this season.

Service Victoria contact-tracing QR code sign at Fairfield Primary School

Even the QR-code contact-tracing platforms used by state governments as part of their COVID management efforts have to be considered as far as IT security for an election is concerned – like this one at a school that is likely to be a polling place

This doesn’t just relate to the electoral oversight bodies but any government, media or civil-society setup in place during the election.

That would encompass things ranging from State governments wanting to head towards fully-electronic voter registration and electoral-roll mark-off processes, through the IT that politicians and political parties use for their business processes and the state-government QR-code contact-tracing platforms regularly used during this COVID-driven era, to the IT operated by the media and journalists themselves to report the election. Here, it’s about the safety of the participants in the election process, the integrity of the election process and the ability for voters to make a proper and conscious choice when they cast their vote.

Such systems have a significant number of risks associated with their data, such as cyber attacks intended to interfere with or exfiltrate data or slow down the performance of these systems. It is more so where the perpetrators of this activity extend to adverse nation states or organised crime anywhere in the world. As well, interference with these IT systems is used as a way to create and disseminate fake news, disinformation and propaganda.

But the key issue regarding Australia’s elections being safe from disinformation and election interference is for us to be media-savvy. That includes being aware of material that plays on your emotions; being aware of bias in media and other campaigns; knowing where sources of good-quality and trustworthy news are; and placing importance on honesty, accuracy and ethics in the media.

Here, it may be a good chance to look at the “Behind The News” media-literacy TV series the ABC produced during 2020 regarding the issue of fake news and disinformation. Sometimes you may also find that established media, especially the ABC and SBS or the good-quality newspapers, may be the way to go for reliable election information. Even looking at official media releases “from the horse’s mouth” at government or political-party Websites may work as a means to identify exaggeration that may be taking place.

Having the various stakeholders encourage media literacy and disinformation awareness, along with government and other entities taking a strong stance with cyber security can be a way to protect this election season.

YouTube to examine further ways to control misinformation

Article

YouTube recommendation list

YouTube to further crack down on misinformation using warning screens and other strategies

YouTube Eyes New Ways to Stop Misinformation From Spreading Beyond Its Reach – CNET

From the horse’s mouth

YouTube

Inside Responsibility: What’s next on our misinfo efforts (Blog Post)

My Comments

YouTube’s part in controlling the spread of repeated disinformation has been found to be very limited in some ways.

This was focused on managing accounts and channels (collections of YouTube videos submitted by a YouTube account holder and curated by that holder) in a robust manner like implementing three-strikes policies when repeated disinformation occurs. It extended to managing the content recommendation engine in order to effectively “bury” that kind of content from end-users’ default views.

But other new issues have come up in relation to this topic. One of these is the need to continually train the artificial-intelligence / machine-learning subsystems associated with how YouTube operates with new data that represents newer situations. This includes the use of different keywords and different languages.

Another approach that will fly in the face of disinformation purveyors is to point end-users to authoritative resources relating to the topic at hand. This will typically manifest in lists of hyperlinks to text and video resources from sources of respect when there is a video or channel that has questionable material.

But a new topic or new angle on an existing topic can yield a data-void where there is scant or no information on the topic from respectable resources. This can happen when there is a fast-moving news event and is fed by the 24-hour news cycle.

Another issue is where someone creates a hyperlink to or embeds a YouTube video in their online presence. This is a common way to put YouTube video content “on the map” and can cause a video to go viral by acquiring many views. In some cases like “communications-first” messaging platforms such as SMS/MMS or instant-messaging, a preview image of the video will appear next to a message that has a link to that video.
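Those preview images are typically built from Open Graph metadata that the linked page exposes in its HTML head. The exact behaviour varies between messaging clients, but a minimal sketch of how a client could pull the preview image out of a page (using an entirely hypothetical HTML snippet) might look like this:

```python
import re

def extract_og_image(html):
    """Pull the Open Graph preview-image URL out of a page's HTML.
    Messaging apps commonly fetch the linked page and read tags like
    <meta property="og:image" content="..."> to build the link preview.
    A regex is enough for this sketch; a real client should use a
    proper HTML parser."""
    m = re.search(r'<meta\s+property="og:image"\s+content="([^"]+)"', html)
    return m.group(1) if m else None

# Hypothetical page head for a shared video link
html = '''<head>
<meta property="og:title" content="Some video">
<meta property="og:image" content="https://example.com/thumb.jpg">
</head>'''
print(extract_og_image(html))  # → https://example.com/thumb.jpg
```

This is why a link shared outside YouTube's own "share" button still gets a thumbnail: the preview comes from the page's metadata, not from the platform's sharing user interface.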

Initially YouTube looked at the idea of preventing a questionable resource from being shared through the platform’s user interface. But questions were raised about this, including whether it limits a viewer’s freedoms regarding taking the content further.

The issue that wasn’t even raised is the fact that the video can be shared without going via YouTube’s user interface. This can be through other means like copying the URL in the address bar if viewing on a regular computer, or invoking the “share” intent on modern desktop and mobile operating systems to facilitate taking it further. In some operating systems, that can extend to printing out material or “throwing” image or video material to the large-screen TV using a platform like Apple TV or Chromecast. Add to this the fact that a user may want to share the video with others as part of academic research or a news report.

Another approach YouTube is looking at is based on an age-old approach implemented by responsible TV broadcasters or by YouTube with violent age-restricted or other questionable content. That is to show a warning screen, sometimes accompanied with an audio announcement, before the questionable content plays. Most video-on-demand services will implement an interactive approach at least in their “lean-forward” user interfaces where the viewer has to assent to the warning before they see any of that content.

In this case, YouTube would run a warning screen regarding the existence of disinformation in the video content before the content plays. Such an approach would make us aware of the situation and act as a “speed bump” against continual consumption of that content or following through on hyperlinks to such content.

Another issue YouTube is working on is keeping its anti-disinformation efforts culturally relevant. This scopes in various nations’ historical and political contexts, whether a news or information source is an authoritative independent source or simply a propaganda machine, fact-checking requirements, linguistic issues amongst other things. The historical and political issue could include conflicts that had peppered the nation’s or culture’s history or how the nation changed governments.

Having support for relevance to various different cultures provides YouTube’s anti-disinformation effort with some “look-ahead” sense when handling further fake-news campaigns. It also encompasses recognising where a disinformation campaign is being “shaped” to a particular geopolitical area with that area’s history being woven into the messaging.

But whatever YouTube is doing may have limited effect if the purveyors of this kind of nonsense use other services to host this video content. This can manifest in alternative “free-speech” video hosting services like BitChute, DTube or PeerTube. Or it can be the content creator hosting the video content on their own Website, something that becomes more feasible as the kind of computing power needed for video hosting at scale becomes cheaper.

What is being raised is YouTube using their own resources to limit the spread of disinformation that is hosted on their own servers rather than looking at this issue holistically. But they are looking at issues like the ever-evolving message of disinformation that adapts to particular cultures along with using warning screens before such videos play.

This is compared to third-party-gatekeeper approaches like NewsGuard (HomeNetworking01.info coverage) where an independent third party scrutinises news content and sites then puts their results in a database. Here various forms of logic can work from this database to deny advertising to a site or cause a warning flag to be shown when users interact with that site.

But by realising that YouTube is being used as a host for fake news and disinformation videos, they are taking further action on this issue. This is even though Google will end up playing cat-and-mouse when it comes to disinformation campaigns.

Being aware of astroturfing as an insidious form of disinformation

Article

Astroturfing more difficult to track down with social media – academic | RNZ News

My Comments

An issue that is raised in the context of fake news and disinformation is a campaign tactic known as “astroturfing”. This is something that our online life has facilitated thanks to easy-to-produce Websites on affordable Web-hosting deals along with the Social Web.

I am writing about this on HomeNetworking01.info because astroturfing is another form of disinformation that we need to be careful of in this online era.

What is astroturfing?

Astroturfing is organised propaganda activity intended to create a belief of popular grassroots support for a viewpoint in relation to a cause or policy. This activity is organised by one or more large organisations, with it typically appearing as the output of concerned individuals or smaller community organisations such as a peak body for small businesses of a kind.

But there is no transparency about who is actually behind the message or the benign-sounding organisations advancing that message. Nor is there any transparency about the money flow associated with the campaign.

The Merriam-Webster Dictionary, which is the dictionary of respect for the American dialect of the English language, defines it as:

organized activity that is intended to create a false impression of a widespread, spontaneously arising, grassroots movement in support of or in opposition to something (such as a political policy) but that is in reality initiated and controlled by a concealed group or organization (such as a corporation).

The etymology for this word comes about as a play on words in relation to the “grassroots” expression. It alludes to the AstroTurf synthetic turf initially installed in the Astrodome stadium in Houston in the USA, with the “AstroTurf” trademark becoming a generic trademark for synthetic sportsground turf sold in North America.

This was mainly practised by Big Tobacco to oppose significant taxation and regulation measures against tobacco smoking, but continues to be practised by entities whose interests are against the public good.

How does astroturfing manifest?

It typically manifests as one or more benign-sounding community organisations that appear to demonstrate popular support for or against a particular policy. It typically affects policies for the social or environmental good where there is significant corporate or other “big-money” opposition to these policies.

The Internet era has made this more feasible thanks to the ability to create and host Websites for cheap. As online forums and social media came on board, it became feasible to set up multiple personas and organisational identities on forums and social-media platforms to make it appear as though many people or organisations are demonstrating popular support for the argument. It is also feasible to interlink Websites and online forums or Social-Web presences by posting a link from a Website or blog in a forum or Social-Web post or having articles on a Social Web account appear on one’s Website.

The multiple online personas created by one entity for the purpose of demonstrating the appearance of popular support are described as “sockpuppet” accounts. This is in reference to children’s puppet shows where two or three puppet actors use glove puppets made out of odd socks and can manipulate twice as many characters as there are actors. Such accounts can surface synchronously with a particular event that is in play, be it the effective date of an industry reform or set of restrictions; a court case or inquiry taking place; or a legislature working on an important law.

An example of this occurred during the long COVID-19 lockdown that affected Victoria last year, where the “DanLiedPeopleDied” and “DictatorDan” hashtags were manipulated on Twitter to create a sentiment of popular distrust of Dan Andrews. Here it was identified that a significant number of the Twitter accounts that drove these hashtags surfaced or changed their behaviour in sync with the lockdown’s effective period.

But astroturfing can also manifest in offline / in-real-life activities like rallies and demonstrations; appearances on talkback radio; letters to newspaper editors; pamphlet drops; and traditional advertising techniques.

Let’s not forget that old-fashioned word-of-mouth advertising for an astroturfing campaign can take place here like over the neighbour’s fence, at the supermarket checkout or around the office’s water cooler.

Sometimes the online activity is used to rally for support for one or more offline activities or to increase the amount of word-of-mouth conversation on the topic. Or the pamphlets and outdoor advertising will carry references to the campaign’s online resources so people can find out more “from the horse’s mouth”. This kind of material used for offline promotion can be easily and cheaply produced using “download-to-print” resources, print and copy shops that use cost-effective digital press technology, firms who screen-print T-shirts on demand from digital originals amongst other online-facilitated technologies.

An example of this highlighted by Spectrum News 1 San Antonio in the USA was the protest activity against COVID-19 stay-at-home orders in that country. This was alluding to Donald Trump and others steering public opinion away from a COVID-safe USA.

This method of deceit capitalises on popular trust in the platform and the apparently-benign group behind the message or appearance of popular support for that group or its message. As well, astroturfing is used to weaken any true grassroots support for or against the opinion.

How does astroturfing affect media coverage of an issue?

The plausible-sounding arguments tendered by a benign-sounding organisation can encourage journalists to “go with the flow” regarding the organisation’s ideas. It can include treating the organisation’s arguments at face value as a supporting or opposing view on the topic at hand, especially where they want to create a balanced piece of material.

This risk is significantly increased in media environments where there isn’t a culture of critical thinking with obvious examples being partisan or tabloid media. Examples of this could be breakfast/morning TV talk shows on private free-to-air TV networks or talkback radio on private radio stations.

But there is a greater risk of this occurring while there is increasingly-reduced investment in public-service and private news media. Here, the fear of newsrooms being reduced or shut down or journalists not being paid much for their output can reduce the standard of journalism and the ability to perform proper due diligence on news sources.

There is also the risk of an astroturfing campaign affecting academic reportage of the issue. This is more so where the student doesn’t have good critical-thinking and research skills and can be easily swayed by spin. It is more so with secondary education or some tertiary-education situations like vocational courses or people at an early stage in their undergraduate studies.

How does astroturfing affect healthy democracies?

All pillars of government can and do fall victim to astroturfing. This can happen at all levels of government ranging from local councils through state or regional governments to the national governments.

During an election, an astroturfing campaign can be used to steer opinion for or against a political party or candidate who is standing for election. In the case of a referendum, it can steer popular opinion towards or against the questions that are the subject of the referendum. This is done in a manner to convey the veneer of popular grassroots support for or against the candidate, party or issue.

The legislature is often a hotbed of political lobbying by interest groups, and astroturfing can be used to create a veneer of popular support for or against legislation or regulation of concern to the interest group. As well, astroturfing can be used as a tool to place pressure on legislature members to advance or stall a proposed law and, in some cases, force a government out of power where there is a stalemate over that law.

The public-service agencies of the executive government who have the power to permit or veto activity are also victims of astroturfing. This comes in the form of whether a project can go ahead or not; or whether a product is licensed for sale within the jurisdiction. It can also affect the popular trust in any measures that officials in the executive government execute.

As well, the judiciary can be tasked with handling legal actions launched by pressure groups who use astroturfing to create a sense of popular support to revise legislation or regulation. It also includes how jurors are influenced in any jury trial or which judges are empanelled in a court of law, especially a powerful appellate court or the jurisdiction’s court of last resort.

Politicians, significant officials and key members of the judiciary can fall victim to character assassination campaigns that are part of one or more astroturfing campaigns. This can affect continual popular trust in these individuals and can even affect the ability for them to live or conduct their public business in safety.

Here, politicians and other significant government officials are increasingly becoming accessible to the populace. This is facilitated by the officials themselves maintaining a Social-Web presence using a public-facing persona on the popular social-media platforms, with the same account name or “handle” being used on the multiple platforms. In the same context, the various offices and departments maintain their Social-Web presence on the popular platforms using office-wide accounts. This is in addition to other online presences like the ministerial Web pages or public-facing email addresses they or the government maintain.

These officials can be approached by interest groups who post to the official’s Social-Web presence. Or a reference can be created to the various officials and government entities through the use of hashtags or mentions of platform-native account names operated by these entities when someone creates a Social Web post about the official or issue at hand. In a lot of cases, there is reference to sympathetic journalists and media organisations in order to create media interest.

As well, one post with the right message and the right mix of hashtags and referenced account names can be viewed by the targeted decision makers and the populace at the same time. Then people who are sympathetic to that post’s message end up reposting that message, giving it more “heat”.

Here, the Social Web is seen as providing unregulated access to these powerful decision-makers. That is although the decision-makers work with personal assistants or similar staff to vet content that they see. As well, there isn’t any transparency about who is posting the content that references these officials i.e. you don’t know whether it is a local constituent or someone pressured by an interest group.

What can be done about it?

The huge question here is what can be done about astroturfing as a means of disinformation.

A significant number of jurisdictions implement attribution requirements for any advertising or similar material as part of their fair-trading, election-oversight, broadcasting, unsolicited-advertising or similar laws. Similarly a significant number of jurisdictions implement lobbyist regulation in relationship to who has access to the jurisdiction’s politicians. As outlined in the RNZ article that I referred to, New Zealand is examining astroturfing in the context of whether they should regulate access to their politicians.

But most of these laws regulate what goes on within the offline space within the jurisdiction that they pertain to. It could become feasible for foreign actors to engage in astroturfing and similar campaigns from other territories across the globe using online means without any action being taken.

The issue of regulating lobbyist access to the jurisdiction’s politicians or significant officials can raise questions. Here it could be about whether the jurisdiction’s citizens have a continual right of access to their elected government or not. As well, there is the issue of assuring governmental transparency and a healthy dialogue with the citizens.

The 2016 fake-news crisis which highlighted the distortion of the 2016 US Presidential Election and UK Brexit referendum became a wake-up call regarding how the online space can be managed to work against disinformation.

Here, Silicon Valley took on the task of managing online search engines, social-media platforms and online advertising networks to regulate foreign influence and assure accountability when it comes to political messaging in the online space. This included identity verification of advertiser accounts or keeping detailed historical records of ads from political advertisers on ad networks or social media or clamping down on coordinated inauthentic behaviour on social media platforms.

In addition to this, an increasingly-large army of “fact-checkers” organised by credible newsrooms, universities and similar organisations appeared. These groups researched and verified claims which were being published through the media or on online platforms and would state whether they are true or false based on their research.

What we can do is research further and trust our instincts when it comes to questionable claims that come from apparently-benign organisations. Here we can do our due diligence and check for things like how long an online account has been in operation, especially whether its appearance is synchronous with particular political, regulatory or similar events occurring or being on the horizon.
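As a rough illustration of that due-diligence check, the sketch below flags accounts whose creation dates cluster suspiciously around an event window. The account names, dates and event are entirely hypothetical; real platforms expose an account's creation date on its profile page or through their public APIs.

```python
from datetime import date, timedelta

def flag_suspect_accounts(accounts, event_start, event_end, lead_days=14):
    """Return the accounts created shortly before or during an event window.

    accounts: list of (handle, creation_date) tuples -- hypothetical data.
    A cluster of freshly-created accounts pushing the same line around a
    lockdown, court case or reform date is a classic astroturfing tell."""
    window_start = event_start - timedelta(days=lead_days)
    return [handle for handle, created in accounts
            if window_start <= created <= event_end]

# Hypothetical example: a set of restrictions takes effect on 1 July 2021
accounts = [
    ("longtime_local", date(2015, 3, 2)),
    ("concerned_cit_99", date(2021, 6, 25)),   # created days before the event
    ("fresh_voice_2021", date(2021, 7, 10)),   # created during the event
]
suspects = flag_suspect_accounts(accounts, date(2021, 7, 1), date(2021, 9, 30))
print(suspects)  # → ['concerned_cit_99', 'fresh_voice_2021']
```

A creation date alone proves nothing, of course; it is one signal to weigh alongside the behavioural tells listed below.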

Here you have to look out for behaviours in the online or offline content like:

  • Inflammatory or manipulative language that plays on your emotions
  • Claims to debunk topic-related myths that aren’t really myths
  • Questioning or pillorying those exposing the wrongdoings core to the argument rather than the actual wrongdoings
  • A chorus of the same material from many accounts
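The last of those behaviours, a chorus of the same material from many accounts, can even be spotted mechanically: normalise the post text so trivially-varied copies collapse to the same key, then count how many distinct accounts push each talking point. This is only a sketch with made-up posts, not a description of how any particular platform does it:

```python
import re
from collections import defaultdict

def normalise(text):
    """Collapse case, punctuation and whitespace so lightly-varied
    copies of the same talking point reduce to the same key."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def find_choruses(posts, min_accounts=3):
    """posts: list of (account, text) pairs. Returns the talking points
    pushed (after normalisation) by at least min_accounts distinct accounts,
    mapped to the sorted list of accounts pushing them."""
    by_text = defaultdict(set)
    for account, text in posts:
        by_text[normalise(text)].add(account)
    return {text: sorted(accts) for text, accts in by_text.items()
            if len(accts) >= min_accounts}

# Hypothetical posts from a hypothetical campaign
posts = [
    ("acct_a", "Say NO to the new levy!!!"),
    ("acct_b", "Say no to the new levy"),
    ("acct_c", "say NO, to the new levy."),
    ("acct_d", "I quite like the new levy, actually."),
]
print(find_choruses(posts))
# → {'say no to the new levy': ['acct_a', 'acct_b', 'acct_c']}
```

Real coordinated campaigns paraphrase more heavily than this, so researchers typically add fuzzier text-similarity measures on top, but the principle of many accounts converging on one message is the same.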

Conclusion

We need to be aware of astroturfing as another form of disinformation that is prevalent in the online age. Here it can take in people who are naive and accept information at face value without doing further research on what is being pushed.

Google fact-checking now applies to image searches

Articles

Google search about Dan Andrews - Chrome browser in Windows 10

Google to add fact checking to images in its search user interfaces

Google adds a fact check feature for images | CNet

From the horse’s mouth

Google

Bringing fact check information to Google Images (Blog Post)

My Comments

Increasingly, images and video are being seen as integral to news coverage with most of us seeing them, especially photographs, of importance when corroborating a fact or news story.

But these are becoming weaponised to tell a different truth compared to what is actually captured by the camera. One way is to use the same or a similar image to corroborate a different fact, with this including the use of image-editing tools to doctor the image so it tells a different story.

I have covered this previously when talking about the use of reverse-image-search tools like Tineye or Google Image Search to verify the authenticity of an image. This will be the same kind of feature that Google has enabled in its search interface when you “google” for something, or in its news-aggregation platforms.
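The exact matching technology behind services like Tineye isn’t public, but reverse image search is commonly explained in terms of perceptual hashing, where near-identical images produce near-identical fingerprints even after resizing or light editing. Here is a minimal average-hash (“aHash”) sketch on hypothetical pixel data:

```python
def average_hash(pixels):
    """Compute a simple average hash of a grayscale image given as a
    2-D list of 0-255 values. Real services first downscale the image
    to a small fixed grid (e.g. 8x8); the input here is assumed to be
    already small. Returns a bit string: 1 where the pixel is above
    the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Count differing bits -- a small distance means near-identical images."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical 4x4 grayscale "images"
original = [[200, 200, 30, 30],
            [200, 200, 30, 30],
            [30, 30, 200, 200],
            [30, 30, 200, 200]]
# A lightly doctored copy: brightness tweaked and one pixel block altered
doctored = [[190, 190, 40, 40],
            [190, 190, 40, 40],
            [40, 40, 190, 40],
            [40, 40, 190, 190]]

d = hamming(average_hash(original), average_hash(doctored))
print(d)  # → 1 (small distance: the images are near-duplicates)
```

The brightness tweak alone changes no bits because the hash compares each pixel to the image’s own mean, which is what makes this kind of fingerprint robust enough to match a doctored copy back to its source.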

Google is taking this further for people who search for images using their search tools. Here, they are adding images to their fact-check processes so it is easy to see whether an image has been used to corroborate questionable information. You will see a “fact-check” indicator near the image thumbnail and when you click or tap on the image for a larger view or more details, you will see some details about whether the image is true or not.

A similar feature appears on the YouTube platform for exhibiting details about the veracity of video content posted there. But this feature is currently available only to users based in Brazil, India and the USA, and I am not sure whether it will be available across all YouTube user interfaces, especially native clients for mobile and set-top platforms.

It is in addition to Alphabet, their parent company, offering a free tool to check whether an image has been doctored. This is because meddling with an image to constitute something else using something like Adobe Photoshop or GIMP is being seen as a way to convey a message that isn’t true. The tool, called Assembler, uses artificial intelligence and algorithms that detect particular forms of image manipulation to indicate the veracity of an image.

But I would also see the rise of tools that analyse audio and video material to identify deepfake activity, or video sites, podcast directories and the like using a range of tools to identify the authenticity of content made available through them. This may include “fact-check” labels with facts being verified by multiple newsrooms and universities; or the content checked for out-of-the-ordinary editing techniques. It can also include these sites and directories implementing a feedback loop so that users can have questionable content verified.

Australian Electoral Commission weighs in on online misinformation

Article

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Are you sure you are casting your vote or able to cast your vote without undue influence?

Australian Electoral Commission boots online blitz to counter fake news | ITNews

Previous coverage

Being cautious about fake news and misinformation in Australia

From the horse’s mouth

Australian Electoral Commission

Awareness Page

Press Release

My Comments

I regularly cover the issue of fake news and misinformation especially when this happens around election cycles. This is because it can be used as a way to effectively distort what makes up a democratically-elected government.

When the Victorian state government went to the polls last year, I ran an article about the issue of fake news and how we can defend ourselves against it during election time. This was because of Australia hosting a run of elections that are ripe for a concerted fake-news campaign – state elections for the two most-populous states in the country and a federal election.

It is being seen as of importance due to the fact that the IT systems maintained by the Australian Parliament House and the main Australian political parties fell victim to a cyber attack close to February 2019, with this hack being attributed to a nation-state. This can lead to the discovered information being weaponised against the candidates or their political parties, similar to the email attack against the Democratic Party in the USA during early 2016 which skewed the US election towards Donald Trump and America towards a highly-divided nation.

The issue of fake news, misinformation and propaganda has been on our lips over the last few years due to us switching away from traditional news-media sources to social media and online search and news-aggregation sites. Similarly, the size of well-respected newsrooms is becoming smaller due to reduced circulation and ratings for newspapers and TV/radio stations driven by our use of online resources. This leads to poorer-quality news reporting that is a similar standard to entertainment-focused media like music radio.

A simplified, low-cost, no-questions-asked path has been facilitated by personal computing and the Internet to create and present material, some of which can be questionable. It is now augmented by the ability to create deepfake image and audio-visual content that uses still images, audio or video clips to represent a very convincing falsehood thanks to artificial intelligence. This content can then be easily promoted through popular social-media platforms or paid positioning in search engines.

Such content takes advantage of the border-free nature of the Internet to allow for an actor in one jurisdiction to target others in another jurisdiction without oversight of the various election-oversight or other authorities in either jurisdiction.

I mentioned what Silicon Valley’s online platforms are doing in relation to this problem such as restricting access to online advertising networks; interlinking with fact-check organisations to identify fake news; maintaining a strong feedback loop with end-users; and operating robust user-account-management and system-security policies, procedures and protocols. Extant newsrooms are even offering fact-check services to end-users, online services and election-oversight authorities to build up a defence against misinformation.

But the Australian Electoral Commission is taking action through a public-education campaign regarding fake news and misinformation during the Federal election. They outlined that their legal remit doesn’t cover the truthfulness of news content, but their campaign encourages voters to check whether the information comes from a reliable or recognised source, how current it is and whether it could be a scam. Of course, cross-border jurisdictional issues arise, especially where material comes in from overseas sources.

They outlined that their remit covers the “authorisation” or provenance of the electoral communications that appear through advertising platforms. As well, they underscore the role of other Australian government agencies like the Australian Competition and Consumer Commission, who oversee advertising issues, and the Australian Communications And Media Authority, who oversee broadcast media. They also have provided links to the feedback and terms-and-conditions pages of the main online services in relation to this issue.

These Federal agencies are also working on the issue of electoral integrity in the context of advertising and other communication to the voters by candidates, political parties or other entities; along with the “elephant in the room” that is foreign interference; and security of these polls including cyber-security.

But what I have outlined in the previous coverage is to look for information that qualifies the kind of story being published especially if you use a search engine or aggregated news view; to trust your “gut reaction” to the information being shared especially if it is out-of-touch with reality or is sensationalist or lurid; checking the facts against established media that you trust or other trusted resources; or even checking for facts “from the horse’s mouth” such as official press releases.

Inspecting the URL in your Web browser’s address bar before the first “/” to see if there is more than what is expected for a news source’s Web site can also pay dividends. But this can be a difficult task if you are using your smartphone or a similarly-constrained user interface.
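As a rough illustration of that check, here is a short Python sketch using only the standard library. The URLs are hypothetical examples; the point is that everything between the scheme and the first “/” is the hostname, and extra material tacked onto an expected domain is a warning sign:

```python
from urllib.parse import urlparse

def hostname_of(url: str) -> str:
    """Return just the hostname portion of a URL -
    the part between the scheme and the first "/"."""
    return urlparse(url).netloc.lower()

# A genuine news site versus a hypothetical look-alike
print(hostname_of("https://www.abc.net.au/news/some-story"))
# -> www.abc.net.au
print(hostname_of("https://www.abc.net.au.example-host.com/news/some-story"))
# -> www.abc.net.au.example-host.com  (extra material after the expected domain)
```

The second hostname ends in a completely different domain even though it starts with the familiar one, which is exactly the trick this inspection is meant to catch.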

I also encourage making more use of established, trusted news sources, including their online presence, as a primary news source during these critical times. Even the simple act of picking up and reading that newspaper or turning on the radio or telly can be a step towards authoritative news sources.

As well, I encourage the use of the reporting functionality or feedback loop offered by social media platforms, search engines or other online services to draw attention to contravening content. This was an action I took as a publisher regarding an ad that appeared on this site which had the kind of sensationalist headline that is associated with fake news.

The issue of online misinformation especially during general elections is still a valid concern. This is more so where the online space is not subject to the kinds of regulation associated with traditional media in one’s home country and it becomes easy for foreign operators to launch campaigns to target other countries. What needs to happen is a strong information-sharing protocol in order to place public and private stakeholders on alert about potential election manipulation.

WhatsApp now highlights messaging services as a fake-news vector

Articles

WhatsApp debuts fact-checking service to counter fake news in India | Engadget

India: WhatsApp launches fact-check service to fight fake news | Al Jazeera

From the horse’s mouth

WhatsApp

Tips to help prevent the spread of rumors and fake news (User Advice)

Video – Click or tap to play

My Comments

For as long as the World Wide Web has existed, email has been used as a way to share online news amongst people in your social circle.

Typically this has shown up in the form of jokes, articles and the like appearing in your email inbox from friends, colleagues or relatives, sometimes with these articles forwarded on from someone else. It also has been simplified through the ability to add multiple contacts from your contact list to the “To”, “Cc” or “Bcc” fields in the email form or create contact lists or “virtual contacts” from multiple contacts.

The various instant-messaging platforms have also become a vector to share links to articles hosted somewhere on the Internet in the same manner as email, as has the carrier-based SMS and MMS texting platforms when used with a smartphone.

But the concern raised about the distribution of misinformation and fake news has been focused on the popular social media and image / video sharing platforms. This is while fake news and misinformation creep into your Inbox or instant-messaging client thanks to one or more of your friends who like passing on this kind of information.

WhatsApp, a secure instant-messaging platform owned by Facebook, is starting to tackle this issue head-on with its Indian userbase as that country enters the election cycle for its main general elections. They are picking up on the issue of fake news and misinformation thanks to the Facebook Group being brought into the public limelight over this issue. As well, Facebook have recently been clamping down on inauthentic behaviour that was targeting India and Pakistan.

WhatsApp is now highlighting the fake-news problem in India, especially as it is seen as a popular instant messenger within that country. They are working with a local fact-checking startup called Proto to create the Checkpoint Tipline to allow users to have links that are sent to them verified. It is based on a “virtual contact” to which WhatsApp users forward questionable links or imagery.

But due to the nature of its end-to-end encryption and the fact that the service is purely a messaging service, WhatsApp itself doesn’t have the ability to verify or highlight questionable content. They have, however, placed limits on the number of users one can broadcast a message to, in order to tame the spread of rumours.

It is also being used as a tool to identify the level of fake news and misinformation taking place on the messenger platform and to see how much of a vector these platforms are.

Personally, I would like to see the various fact-checking agencies have an email mailbox where you can forward emails with questionable links and imagery to so they can verify that rumour mail doing the rounds. It could operate in a similar vein to how the banks, tax offices and the like have set up mailboxes for people to forward phishing email to so these organisations can be aware of the phishing problem they are facing.

The only problem with this kind of service is that people who are astute and savvy are more likely to use it. It may not reach those of us who just end up passing on whatever comes our way.

Being cautious about fake news and misinformation in Australia

Previous Coverage

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Are you sure you are casting your vote or able to cast your vote without undue influence?

Being aware of fake news in the UK

Fact-checking now part of the online media-aggregation function

Useful Australian-based resources

ABC Fact Check – ran in conjunction with RMIT University

Political Parties

Australian Labor Party (VIC, NSW)

Liberal Party – work as a coalition with National Party (VIC, NSW)

National Party – work as a coalition with Liberal Party (VIC, NSW)

Australian Greens – state branches link from main page

One Nation (Pauline Hanson)

Katter’s Australia Party

Derryn Hinch’s Justice Party

Australian Conservatives

Liberal Democratic Party

United Australia Party

My Comments

Over the next six months, Australia will see some very critical general elections come to pass both on a federal level and in the two most-highly-populated states that host most of that country’s economic and political activity. On 30 October 2018, the election writs were served in the state of Victoria for its general election to take place on 24 November 2018. Then, on 23 March 2019, New South Wales will expect to go to the polls for its general election. Then the whole country will expect to go to the polls for the federal general election by 18 May 2019.

As these election cycles take place over a relatively short space of time, there is a high risk that Australians could fall victim to misinformation campaigns. This can subsequently lead to state and federal ballots being cast that steer the country against the grain, like what happened in 2016 with the USA voting in Donald Trump as their President and the UK voting to leave the European Union.

Google News - desktop Web view

Look for tags within Google News that describe the context of the story

The issue of fake news and misinformation is being seen as increasingly relevant as we switch away from traditional media towards social media and our smartphones, tablets and computers for our daily news consumption.  This is thanks to the use of online search and news-aggregation services like Google News; or social media like Facebook or Twitter which can be seen by most of us as an “at-a-glance” view of the news.

As well, a significant number of well-known newsrooms are becoming smaller due to the reduced circulation and ratings for their newspaper or radio / TV broadcast thanks to the use of online resources for our news. It can subsequently lead to poor-quality news reporting and presentation with a calibre equivalent to the hourly news bulletin offered by a music-focused radio station. It also leads to various mastheads plagiarising content from other newsrooms that place more value on their reporting.

The availability of low-cost or free no-questions-asked Web and video hosting along with easy-to-use Web-authoring, desktop-publishing and desktop-video platforms make it feasible for most people to create a Web site or online video channel. It has led to an increased number of Websites and video channels that yield propaganda and information that is dressed up as news but with questionable accuracy.

Another factor that has recently been raised in the context of fake news, misinformation and propaganda is the creation and use of deepfake image and audio-visual content. This is where still images, audio or video clips that are in the digital domain are altered to show a falsehood using artificial-intelligence technology, in order to convince viewers that they are dealing with an original audio-visual resource. The audio content can be made to mimic an actual speaker’s voice and intonation as part of creating a deepfake soundbite or video clip.

It then becomes easy to place fake news, propaganda and misinformation onto easily-accessible Web hosts including YouTube in the case of videos. Then this content would be propagated around the Internet through the likes of Twitter, Facebook or online bulletin boards. It is more so if this content supports our beliefs and enhances the so-called “filter bubble” associated with our beliefs and media use.

There is also the fact that newsrooms without the resources to rigorously scrutinise incoming news could pick this kind of content up and publish or broadcast this content. This can also be magnified with media that engages in tabloid journalism that depends on sensationalism to get the readership or keep listeners and viewers from switching away.

The borderless nature of the Internet makes it easy to set up presence in one jurisdiction to target the citizens of another jurisdiction in a manner to avoid being caught by that jurisdiction’s election-oversight, broadcast-standards or advertising-standards authority. Along with that, a significant number of jurisdictions focus their political-advertising regulation towards the traditional media platforms even though we are making more use of online platforms.

Recently, the Australian Electoral Commission, along with the Department of Home Affairs, Australian Federal Police and ASIO, established an Electoral Integrity Assurance Task Force. This was done in advance of recent federal byelections such as the Super Saturday byelections, where there was the risk of clandestine foreign interference taking place that could affect the integrity of those polls.

But the issue I am drawing attention to here is the use of social media or other online resources to run fake-news campaigns to sway the populace’s opinion for or against certain politicians. This is exacerbated by the use of under-resourced newsrooms that could get such material seen as credible in the public’s eyes.

But most of Silicon Valley’s online platforms are taking various steps to counter fake news, propaganda and disinformation, as follows.

Firstly, they are turning off the money-supply tap by keeping their online advertising networks away from sites or apps that spread misinformation.

They also are engaging with various fact-check organisations to identify fake news that is doing the rounds and tuning their search and trending-articles algorithms to bury this kind of content.

Autocomplete list in Google Search Web user interface

Google users can report Autocomplete suggestions that they come across in their search-engine experience.

They are also maintaining a feedback loop with their end-users by allowing them to report fake-news entries in their home page or default view. This includes search results or autocomplete entries in Google’s search-engine user interface. This is facilitated through a “report this” option that is part of the service’s user interface or help pages.

Most of the social networks and online-advertising services are also implementing robust user-account-management and system-security protocols. This includes eliminating or suspending accounts that are used for misinformation. It also includes checking the authenticity of accounts running pages or advertising campaigns that are politically-targeted through methods like street-address verification.

In the case of political content, social networks and online-advertising networks are implementing easily-accessible archives of all political advertising or material that is being published, including where the material is targeted.

ABC FactCheck – the ABC’s fact-checking resource that is part of their newsroom

Initially these efforts are taking place within the USA but Silicon Valley is rolling them out across the world at varying timeframes and with local adaptations.

Personally, I would still like to see a strong dialogue between the various Social Web, search, online-advertising and other online platforms; and the various government and non-government entities overseeing election and campaign integrity and allied issues. This can be about oversight and standards regarding political communications in the online space along with data security for each stakeholder.

What can you do?

Look for any information that qualifies the kind of story if you are viewing a collection of headlines like a search or news-aggregation site or app. Here you pay attention to tags or other metadata like “satire”, “fact checking” or “news” that describe the context of the story or other attributes.

Most search engines and news-aggregation Websites will show up this information in their desktop or mobile user interface and are being engineered to show a richer set of details. You may find that you have to do something extra like click a “more” icon or dwell on the heading to bring up this extra detail on some user interfaces.

Trust your gut reaction to that claim being shared around social media. You may realise that a claim associated with fake news may be out of touch with reality. Sensationalised or lurid headlines are a usual giveaway, along with missing information or copy that whips up immediate emotional responses from the reader.

Check the host Website or use a search engine like Google to see if the news sources you trust cover that story. You may come across one or more tools that identify questionable news easily, typically in the form of a plug-in or extension that works with your browser if its functionality can be expanded with these kinds of add-ons. This is something that is more established with browsers that run on regular Windows, Mac or Linux computers.

It is also a good idea to check for official press releases or similar material offered “from the horse’s mouth” by the candidates, political parties, government departments or similar organisations themselves. In some cases during elections, some of the candidates may run their own Web sites, or a Website that links from the political party’s Website. You will find this material on the Websites run by these organisations, and it may indicate whether you are dealing with a “beat-up” or exaggeration of the facts.

As you do your online research in to a topic, make sure that you are familiar with how the URLs are represented on your browser’s address bar for the various online resources that you visit. Here, be careful if a resource has more than is expected between the “.com”, “.gov.au” or similar domain-name ending and the first “/” leading to the actual online resource.
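One hedged way to make that habit concrete is to compare the hostname’s ending against a short allowlist of domains you already trust. This is only a sketch; the domains in the tuple below are examples, and you would substitute the news sources you personally rely on:

```python
from urllib.parse import urlparse

# Example allowlist - substitute the news domains you actually trust.
TRUSTED_SUFFIXES = ("abc.net.au", "theage.com.au", "smh.com.au")

def looks_trusted(url: str) -> bool:
    """True only if the URL's hostname is a trusted domain
    or a genuine subdomain of one (not merely a look-alike prefix)."""
    host = urlparse(url).netloc.lower()
    return any(host == s or host.endswith("." + s) for s in TRUSTED_SUFFIXES)

print(looks_trusted("https://www.abc.net.au/news/story"))     # True
print(looks_trusted("https://abc.net.au.fake.example/news"))  # False
```

Note that the check matches the *end* of the hostname, not the start; a deceptive host like `abc.net.au.fake.example` begins with a trusted name but ends in a different domain, so it is rejected.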

Kogan Internet table radio

Sometimes the good ol’ radio can be the trusted news source

You may have to rely on getting your news from one or more trusted sources. This would include the online presence offered by these sources. Or it may be about switching on the radio or telly for the news or visiting your local newsagent to get the latest newspaper.

Examples of these are: the ABC (Radio National, Local radio, News Radio, the main TV channel and News 24 TV channel), SBS TV, or the Fairfax newspapers. Some of the music radio stations that are part of a family run by a talk-radio network like the ABC with their ABC Classic FM or Triple J services will have their hourly newscast with news from that network. But be careful when dealing with tabloid journalism or commercial talkback radio because you may be exposed to unnecessary exaggeration or distortion of facts.

As well, use the social-network platform’s or search engine’s reporting functionality to draw attention to fake news, propaganda or misinformation that is being shared or highlighted on that online service. In some cases like reporting inappropriate autocomplete predictions to Google, you may have to use the platform’s help options to hunt for the necessary resources.

Here, as we Australians face a run of general-election cycles that can be very tantalising for clandestine foreign interference, we have to be on our guard regarding fake news, propaganda and misinformation that could affect the polls.