Category: Fake News, Disinformation and Propaganda

How to redefine media sources now that we are online

TenPlay Website screenshot with some FAST channels offered by the Ten Network.

Traditional media like the Ten Network commercial free-to-air TV establishing an online presence

I often hear remarks that people, especially youth and young adults, are not using traditional media like newspapers or broadcast media for their news. Rather, they are using social media or other online sources and reading their news on a smartphone or tablet.

This has been driven by the “cord-cutting” trend, especially within the USA, where younger people are cancelling pay-TV subscriptions and relying primarily on online media. In some cases, this underscores a move away from established media outlets towards what is available on the Internet.

But the way we view media is changing now that we are in the online age.

Media publishers seem to be divided between two major classes.

One class is the established media outlet that has been a publisher or a broadcaster for a long time. These have been associated with high setup costs and requirements such as printing presses and distribution infrastructure including newsstands; or broadcast licences and RF infrastructure (transmitters, satellite systems or cable-TV setups). Examples of these are the major newspapers or the public-service and commercial broadcasters.

The other class is the online-first publishers like bloggers, podcasters or YouTubers who publish their content primarily on one or more online platforms. They typically set themselves up on these platforms by creating an account; or in some cases, they rent hosting space at a Web hosting provider and buy “online real estate” in the form of one or more domain names to create a Website.

An increasing role of online services that aggregate content

Lenovo Yoga 5G convertible notebook press image courtesy of Lenovo

Smartphones, tablets and laptops being the devices we consume media on nowadays

Then there are the online platforms like social media, news aggregators, podcast directories, Internet-radio directories and video servers that simply serve the purpose of aggregating content produced by other online publishers.

This can also include portals like MSN.com, Google or Yahoo that show news at a glance on a home page, along with search engines like Google or Bing. We can expect to see more of these services come about as the Internet becomes the backbone of media distribution.

Publishers have seen these services as being of questionable value because publishers cannot monetise their work, especially if it is reproduced verbatim by the aggregator. This has caused continual spats between the established publishers and Big Tech, who see themselves as aggregators rather than publishers.

What is the reality?

Established media appearing online

Feedly screenshot

RSS Webfeeds appearing through Feedly

But established media outlets have also set up multiple online front-ends, whether free or paid. These include at least a news portal run by the publisher or broadcaster, but also RSS Webfeeds, podcasts or videos that appear in podcast and video directories, and content that is posted to the Social Web by the publisher.

Examples include TV broadcasters setting up “broadcast video on demand” platforms where their broadcast content is available for viewing at any time; established news outlets offering their video reports on YouTube; or radio stations running online news portals. Or there are the Internet radio services that work with apps, Internet radios or smart speakers to bring traditional broadcast radio from anywhere in the world to you without the need for a local RF-level presence.

Add to this attempts by TV and radio technology guardians to blur the distinction between consuming broadcast media via RF and Internet means and assure a familiar user experience when listening to or watching broadcast content. There is also pressure from established broadcasters to improve the discovery of their content that is offered linearly or on-demand through newer Internet-based devices.

Media outlets catering to the younger audience

A significant trend for established media publishers is to establish “youth-focused” media brands intended to appeal to teenagers and young adults. These nowadays appear exclusively or primarily on online platforms, with the content created and edited by young adults. As well, the content-presentation style is designed to appeal to youth and young adults, typically with snappy audio and video presentation and youthfully-fresh writing that stays on-trend with the young audience.

This is in addition to new “young-audience-first” media outlets appearing with content pitched to the young audience. Previously, such an outlet would have been a magazine or radio station whose content was primarily about fashion or pop-culture trends. But nowadays it manifests in the form of a podcast or online masthead accessible on the Internet that covers all issues of interest to young people, including lifestyle issues.

This is something that some of the established media had been working on prior to the Internet, typically through running magazines, radio stations or broadcast shows that appealed to younger people. Here, these shows were seen as complementary to the rest of that media outlet’s output, thus limiting the content of that brand to topics like the latest pop-culture news.

Here, youth-focused media was seen as a way for business to court the valuable market represented by young people, using these platforms to pitch products and services relevant to that age group. Or, for broadcasters that didn’t rely on advertising, it was a way to stay relevant and attractive to younger audiences.

A history of adapting to new realities

These are steps being taken by established media outlets in order to keep themselves relevant to the online generation, especially the younger generations. It is similar to how, in prior times, newspaper publishers had to cope with the new radio broadcasters when radio became popular, and how radio broadcasters had to cope when TV became popular and newer pure-play TV broadcasters appeared on the TV dial.

The main example is to have a Web-driven online newspaper that is offered for free, through donations or through subscriptions, depending on the publisher’s business model. Other approaches include audio-on-demand (podcast) or video-on-demand material, or having the broadcast stream offered by Internet means.

The media outlets often see this as a way not just to stay relevant but to try different offerings or reach different markets in a low-risk manner. For example, The Guardian and the Daily Mail, two British newspapers, are reaching into other Anglophone territories by offering online versions of their mastheads that can be read there. Or Communications Fiji Limited, who run a handful of radio stations in Fiji and Papua New Guinea, are running a Fiji-relevant online newspaper masthead known as Fiji Village.

There has always been criticism about new media types appearing. This tended to occur when that media type reached increased saturation amongst the population and offered content that was popular. In a lot of cases, this criticism was directed at newer media platforms whose content panders to our base instincts.

Online access to press releases

Most organisations including governments are publishing resources “from the horse’s mouth” online under their brand. These resources typically appear as press releases, blog posts or similar content including audiovisual content. Here, you can find them on the organisation’s Website or on online-service accounts operated by the organisation.

These can come into play for verifying the authenticity of news material and can even be useful for working against exaggeration by media outlets. Sometimes the blog posts can be used to “flesh out” what is being talked about in the press releases.

The issues to think of

A key issue is encouraging people to be aware of the quality of news and information they consume from media in general.

Here, the blame for poor-quality news and information tends to be laid at the feet of online media. But these problems appear with both traditional media and the new online media.

For example, tabloid journalism, especially of a partisan nature, has been seen as a long-term media issue. It affects offline media, in the form of “red-top” tabloid newspapers, talkback radio hosted by “shock jocks”, tabloid-style public-affairs shows on Australian commercial TV, and far-right cable-TV news channels; as well as online media especially partisan online media outlets. Here, the issues raised include chequebook journalism, portraying marginalised communities in a negative light, and pandering to personal biases through emotion-driven copywriting.

In the online context, it is often referred to as “click-bait” because end-users are encouraged to click on the material to see further information about the topic. This often leads to seeing many ads for questionable online businesses.

This issue has become more intense since 2016 when it was realised that fake news and disinformation spread through social media was used to steer the outcome of the Brexit referendum and the US presidential election held that year.

What can be done

Media literacy

A key requirement is to encourage media literacy through education. An increasing number of schools are integrating media literacy into secondary-school curriculums, usually under various subjects.

As well, some libraries and community-education facilities are teaching media literacy to adults as short courses. You may find that some secondary schools run a media-literacy short course as part of their community-education effort.

In addition, respected media outlets including public-service broadcasters are supplying material about media literacy. Google is also joining in on the media-literacy game by running YouTube videos on that topic, thanks to YouTube being where videos with questionable information are being published.

Examples of this include the ABC’s “Media Watch” TV show that critiques media and marketing, or their “Behind The News” media-literacy video series that ran during 2020 as COVID started to take hold.

Here, media literacy is about being able to “read between the lines” and assess the veracity of news content. This includes being able to assess news sources carefully and critically as well as assess how news outlets are treating particular topics.

Flagging, debunking and prebunking misinformation and disinformation

Another effort that is taking place is the flagging, debunking and “prebunking” of misinformation and disinformation.

Fact-check websites run by established media outlets and universities draw our attention to questionable information and highlight whether it is accurate or not. As well, they write up information to substantiate their findings regarding the questionable information, derived from collections of established knowledge on the topic.

Here, one could check through one or more of these Websites to see whether the information is accurate or not and why it is or isn’t accurate.

As well, mainstream online service providers are joining in the game by flagging potential disinformation and providing links to accurate resources on the topic. This effort was very strong through the COVID pandemic due to the misinformation and disinformation that was swirling around cyberspace during the height of the pandemic. Such disinformation was at risk of causing people to make the wrong health choices regarding limiting the spread of COVID, like not masking up or avoiding COVID vaccinations.

Then there are “prebunking” efforts typically undertaken by government departments or civil society to warn us about potential disinformation and propaganda. This is to make the public aware of the questionable information in a preemptive manner and publish accurate information on the topic at hand.

A common analogy that is used is how vaccinations work to defend our bodies against particular diseases or reduce the harm they can cause. I also use the common reference to the “guardrail at the top of the cliff” versus the “ambulance at the bottom of the cliff” where the guardrail protects against incidents occurring.

It can be in the form of online resources like FAQs carrying accurate information on the topic at hand, typically to rebut the common myths. This can be augmented with other efforts like public-service announcements in traditional media or experts making appearances in the public space or on broadcasts to talk about these issues.

Conclusion

New approaches to distributing and consuming news will require us across the generations to adapt our thoughts regarding the different media outlets that exist. This will be more about the quality of the journalism that these outlets provide rather than how the news is distributed.

This will include identifying sources of good-quality journalism and, where applicable, supporting these sources in whatever way possible. As well, keeping ourselves media-literate will also be an important task.

When use of multiple public accounts isn’t appropriate

Article

Facebook login page

There are times where use of public accounts isn’t appropriate

The murky world of politicians’ covert social media accounts (sbs.com.au)

My Comments

Just lately there have been questions raised about how Australian politicians and their staff members were operating multiple online personas to disparage opponents, push political ideologies or “blow their own trumpet”.

It is being raised in connection with legislative reforms that the Australian Federal Government are working on to place the onus of responsibility regarding online defamation on whoever posts the defamatory material in a comments trail on an online service. This is different to the status quo, where whoever sets up or manages an online presence like a Website or Facebook Page is liable for defamation.

Here, it is in the context of what is to be expected for proper political communication including any “government-to-citizen” messaging. This is to make sure we can maintain trust in our government and that all political messaging is accurate and authentic in the day and age of fake news and disinformation.

I see this also being extended to business communication, including media/marketing/PR and non-profit advocacy organisations who have a high public profile. Here, it is to assure that any messaging by these entities is authentic so that people can build trust in them.

An example of a public-facing online persona – the Facebook page of Dan Andrews, the current Premier of Victoria

What I refer to as “online personas” are email, instant-messaging and other communications-service accounts; Web pages and blogs; and presences on various parts of the Social Web that are maintained by a person or organisation. It is feasible for a person or organisation to maintain a multiplicity of online personas, like multiple email accounts or social-media pages, that are used to keep public and private messaging separate, whether that’s at the business or personal level.

The normal practice for public figures at least is to create a public online persona and one or two private online personas such as an intra-office persona for colleagues and a personal one for family and friends. This is a safety measure to keep public-facing communications separate from business and personal communications.

Organisations may simply create particular online personas for certain offices with these being managed by particular staff members. In this case, they do this so that communications with a particular office stay the same even as office-holders change. As well, there is the idea of keeping “business-private” material separate from public-facing material.

In this case, the online personas reference the same entity by name at least. This is to assure some form of transparency about who is operating that persona. Other issues that come into play here include which computing devices are being used to drive particular online personas.

This is more so for workplaces and businesses that own computing and communications hardware and have staff communicate on those company-owned devices for official business, while staff members use devices they bought themselves to operate non-official online personas. That said, more entities are moving towards “BYOD” practices where staff members use their own devices for official work and there are systems in place to assure secure, confidential work from staffer-owned devices.

But there is concern about some Australian politicians creating multiple public-facing personas in order to push various ideologies. Here, these personas are operated in an opaque manner in order to create the appearance of multiple discrete persons. This technique, when used to make it appear as though many vouch for a belief or ideology, is referred to under terms like sockpuppetry or astroturfing.

This issue is being raised in the context of government-citizen communication in the online era. But it can also be related to individuals, businesses, trade unions or other organisations who are using opaque means to convey a sense of “popular support” for the same or similar messages.

What I see as appropriate when establishing multiple online personas is that there is some form of transparency about which person or organisation is managing the different online personas. That includes where there are multiple “child” online personas, like Websites, operated by a “parent” online persona like an organisation. This situation comes into being where online personas like email addresses and microsites (small Websites with specific domain names) are created for a particular campaign but aren’t torn down after that campaign.

As well, it includes which online personas are used for which kinds of communications. This includes what is written on that “blue-ticked” social-media page or the online addresses printed on business cards or literature you hand out to the public.

Such public-communications mandates will also be required under election-oversight or fair-trading legislation so people know who is behind the messaging. These are especially important if the messaging is issues-based rather than candidate-based. If an individual is pushing a particular message under their own name, they will have to state whether an entity is paying or encouraging them to advance the message.

This is due to most of us becoming conscious of online messaging from questionable sources, thanks to the popular concern about fake news and disinformation and its impact on elections since 2016 with the Brexit referendum and Donald Trump’s presidential victory in the USA. It is also due to the rise of online influencer culture, where brands end up using big-time and small-time celebrities and influencers to push their products, services and messages online.

Where to go now that Elon Musk has taken over Twitter

Recently Elon Musk, the head of Tesla and founder of SpaceX, bought out Twitter.

This takeover has been seen not as the kind of takeover where one wants to invest in a company but more as a political move. It came about in the runup to the 2022 Midterm elections in the USA, an election cycle that affects members of Congress and significant state-level officials like governors and secretaries of state.

This is because this Midterm election cycle is a “do-or-die” moment for American democracy, hinging on whether state officials or members of Congress who support Donald Trump and his election-denial rhetoric come into power. It is the first Midterms since the January 6, 2021 insurrection at the Capitol, which was about denying the legitimate result of the 2020 Presidential election.

The goal of this takeover was to convert Twitter into a so-called “free-speech” social media platform like Parler, Gab or Truth Social, including reinstating Donald Trump’s Twitter presence. This included the laying off of at least 4000 staff, especially those involved in content moderation.

Here, Twitter has lost out as far as brand safety and social respect are concerned, with a significant number of household names removing their advertising or online presence from Twitter. As well, increasingly many of us are considering or taking steps to limit our presence on, or remove ourselves from, Twitter.

As well, this takeover has ended up in a spat between Elon Musk and Apple about the possibility of Apple removing the Twitter native mobile app from the iOS App Store. This is part of Apple’s effort to make the iOS App Store a clean house with content and apps that are fit for work and the family home. Lately, this has manifested in Apple wiping its Twitter account and removing its posts.

Competing social platforms

Facebook, Instagram, LinkedIn and Hive Social

The Meta-run social-media platforms, i.e. Facebook and Instagram, are acquiring new appeal as a business-to-consumer social-media presence. This is in addition to LinkedIn acquiring a stronger relevance in the business-to-business space. This is because these social networks are maintaining some form of proper content moderation that keeps them brand-safe and with some form of social licence.

For example, these platforms are being used by brands, public figures and the like as a means to distribute information “from the horse’s mouth” like press releases. This is in addition to buying advertising space on these platforms to run campaigns. Similarly, the established media are maintaining their presence on these platforms, typically as an “on-platform” presence for their news services.

Another network being put on the map is Hive Social, which is being run as an alternative to Twitter with the same user experience. This is yet another platform with a centralised user experience, but it is facing some early problems due to its success as a Twitter alternative. Here, you may find that service availability may not be strong and there have been some security issues.

Mastodon and the Fediverse

Another platform that has gained a lot of heat over the last few weeks is Mastodon. This is a decentralised Twitter-style social network where each “Instance” server works like a small bar or café where the staff have their finger on the pulse as far as the patrons are concerned. But each Mastodon Instance is linked to the others via the Fediverse, which works in a similar way to email.

The Fediverse uses the ActivityPub publish-and-subscribe protocol and relies on interconnected servers and decentralised networking protocols. It is used by Mastodon and other services like PeerTube and Pleroma. In this space, each server for a platform is called an Instance and these link or “federate” with other servers to give the appearance of a large social network. But the Instance owner has the upper hand on what goes on in that Instance server.
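To make the federation idea concrete, here is a minimal sketch (in Python, standard library only) of the WebFinger lookup one Instance performs to find a user’s ActivityPub “actor” document on another Instance. The handle in the usage comment is just an example, not an endorsement of any account.

```python
# A minimal sketch of Fediverse user discovery via WebFinger, assuming a
# reachable Mastodon-compatible instance; the example handle is a placeholder.
import json
import urllib.parse
import urllib.request

def resolve_actor(handle: str) -> str:
    """Resolve a handle like 'user@mastodon.social' to its ActivityPub actor URL."""
    user, domain = handle.lstrip("@").split("@", 1)
    query = urllib.parse.urlencode({"resource": f"acct:{user}@{domain}"})
    url = f"https://{domain}/.well-known/webfinger?{query}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    # The 'self' link with the ActivityPub media type points at the actor document.
    for link in data.get("links", []):
        if link.get("rel") == "self" and "activity+json" in (link.get("type") or ""):
            return link["href"]
    raise LookupError(f"No ActivityPub actor found for {handle}")

# Example: print(resolve_actor("someuser@mastodon.social"))
```

Once one server has the actor document, it knows where to deliver posts and follow requests, which is what gives the appearance of one large network built from many small ones.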

These setups could also be seen as being similar to the bulletin-board systems that existed before the Internet was popular where most of them were interconnected using FidoNet as a means to store and forward messages and emails between the BBS systems.

When you create an account on a Mastodon Instance, you can add a link to a Website you run, and this is used as a way to authenticate you. But you also have to add a link on your Website back to your Mastodon presence for you to be authenticated, which then marks that link as verified on your profile.
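For the curious, this verification is a simple “rel=me” back-link check. Below is a rough sketch of that check in Python (standard library only); the URLs in the usage comment are placeholders, not real accounts.

```python
# A minimal sketch of a rel="me" back-link check of the kind behind Mastodon's
# link verification, assuming the page is plain HTML; URLs are placeholders.
from html.parser import HTMLParser
import urllib.request

class RelMeParser(HTMLParser):
    """Collect href values of <a> and <link> tags carrying rel="me"."""
    def __init__(self):
        super().__init__()
        self.rel_me_links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("a", "link") and "me" in (attrs.get("rel") or "").split():
            self.rel_me_links.append(attrs.get("href"))

def verifies(website_url: str, profile_url: str) -> bool:
    """True if website_url links back to profile_url with rel="me"."""
    with urllib.request.urlopen(website_url) as response:
        parser = RelMeParser()
        parser.feed(response.read().decode("utf-8", errors="replace"))
    return profile_url in parser.rel_me_links

# Example: verifies("https://example.com", "https://mastodon.example/@alice")
```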

At the moment, there is support for only one user account per Mastodon Instance server, so you can’t really run a “private” and a “public” account on the same Instance. What could work is keeping a public-facing account on a Mastodon Instance associated with your work and a private account for personal posts on a community Mastodon server. There doesn’t seem to be support for “group” accounts that can be operated by multiple users at the moment.

But as with other open-source software efforts, Mastodon will be subject to continual tweaks and revisions to bring it to what people will want out of it. There may also be activity taking place to make it easier to establish Mastodon Instance servers, such as porting to popular business server environments or integration with business-computing account datasets.

Other technologies worth considering

Online forums and similar technologies

Old-school “pre-social-media” technologies like online forums of the phpBB or vBulletin kind, or email-list platforms like listservs, may have to be used. As well, the group functionality offered by Facebook, WhatsApp, Viber, Signal and Telegram comes into its own here as a limited-circulation Twitter replacement.

Blogs and news Websites

The traditional blog and the regularly-updated news Website or “update page” are becoming more relevant in this time. Here, these will be augmented with an RSS Webfeed or an email update offered by the site that comes out on a regular basis.

What can organisations, content authors and public figures do?

Organisations, content authors and public figures can keep a Website alive with the latest information if they aren’t already doing this. This would work really well with a blog or news page that is always up-to-date, and these resources are best augmented with at least one RSS Webfeed that reflects the updates that are made, as shown in the sketch below.
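An RSS Webfeed is just a small XML file listing your latest items. Here is a minimal sketch in Python (standard library only) that generates one; the site name, URL and post data are placeholders, and in practice the post list would come from your content-management system.

```python
# A minimal sketch of generating an RSS 2.0 Webfeed for a site's latest posts;
# all names, URLs and dates here are placeholders.
from xml.etree import ElementTree as ET
from email.utils import format_datetime
from datetime import datetime, timezone

posts = [  # in practice this would come from your CMS or database
    {"title": "Our latest update", "link": "https://example.com/latest-update",
     "published": datetime(2022, 12, 1, tzinfo=timezone.utc)},
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example News Page"
ET.SubElement(channel, "link").text = "https://example.com/news"
ET.SubElement(channel, "description").text = "Updates from Example Org"

for post in posts:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = post["title"]
    ET.SubElement(item, "link").text = post["link"]
    ET.SubElement(item, "guid").text = post["link"]
    # RSS expects RFC 822-style dates
    ET.SubElement(item, "pubDate").text = format_datetime(post["published"])

ET.ElementTree(rss).write("feed.xml", encoding="utf-8", xml_declaration=True)
```

The resulting feed.xml is what feed readers like Feedly, and the email-publishing platforms mentioned next, poll for new items.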

The RSS Webfeed can be used to feed a reputable email-publishing platform like Feedblitz or Mailchimp so that people get the updates in their email inbox. Your LinkedIn, Facebook, Instagram or other brand-safe social-media presences can come into their own here as well when you post links to your latest posts there, and are worth maintaining. As well, you could consider setting up shop on Hive Social, which is becoming a viable alternative to Twitter.

Small-time operators should work with a Webhost that offers a range of online services at reasonable prices. These should include email, Website hosting and hosting one or two online services in a secure manner.

If you can, investigate creating a business-wide Mastodon Instance. This is about having your own space that you control, and is something that your IT staff or Webhost can set up, especially if they are familiar with Linux. Here, you could have usernames that reflect your workgroups or staff who want to have a public Mastodon account.

Let’s not forget creating online forums using the likes of bbPress, phpBB or vBulletin for your company or industry. Even vertical-market software that suits your organisation’s type or the industry it works in, like religion or education, could come into its own.

Conclusion

The takeover of Twitter by Elon Musk as a political affair shows that there is the risk of online services falling into the wrong hands. Here, an emphasis is being placed on a plurality of social media and other online services that can be moderated to preserve sanity on the Internet.

What is prebunking in the context of news accuracy?

Facebook login page

Prebunking is used to rebut potential disinformation campaigns on social media

As you hear about how different entities are managing fake news and disinformation, you may hear of “prebunking” in the war against these disinformation campaigns. Here, it is about making sure the correct and accurate information gets out first before falsehoods gain traction.

Fact-checkers operated by newsrooms and universities, and engaged by media outlets and respectable online services, typically analyse news that comes their way to see if it is factual and truthful. One of their primary tasks is to “debunk” or discredit what they have found to be falsehoods by publishing information that rebuts the falsehood.

But the war against fake news and disinformation is also taking another approach by dealing with potential disinformation and propaganda in a pre-emptive manner.

Here, various organisations like newsrooms, universities or government agencies will anticipate a line of disinformation that is likely to be published while concurrently publishing material that refutes the anticipated falsehood. It may also be about rebutting possible information manipulation or distortion of facts by publishing material that carries the accurate information.

This process is referred to as “prebunking” rather than debunking because it forewarns the general public about possible falsehoods or information manipulation. It is also couched in terms analogous to inoculation or vaccination, because a medical vaccine like one of those COVID jabs establishes a defence in your body against a pending infection, thus making it hard for that infection to take hold.

Prebunking is seen as a “heads-up” alert to a potential disinformation campaign so we can be aware of it and take action against it. One way of describing this is prebunking as the “guardrail at the top of the cliff” and debunking as the “ambulance at the bottom of the cliff”. These efforts are also a way to sensitise us to the techniques used to have us believe distorted messaging and disinformation campaigns, by highlighting fearmongering, scapegoating and pandering to our base instincts, emotions and biases.

Prebunking efforts are typically delivered as public-service announcements or posts run on Social Web platforms by government entities, advocacy organisations or similar groups. Other media platforms like television or radio public-service announcements can also be used to present prebunking information. Where a post or announcement leads to any online resources, this will be in the form of a simple-language landing page that may even provide a FAQ (frequently-asked questions) article about the topic and the falsehoods associated with it. Examples of this in Australia are the state and federal election authorities, who have been running posts on social media platforms to debunk American-style voter-suppression disinformation that surfaces around Australian elections.

Such campaigns are in response to the disinformation risks that are presented by the 24-hour news cycle and the Social Web. In a lot of cases, these campaigns are activated during a season of high disinformation risk like an election or referendum. Sometimes a war in another part of the world may be the reason to instigate a prebunking campaign because this is where the belligerent states will activate their propaganda machines to make themselves look good in the eyes of the world.

But the various media-literacy efforts run by public libraries, educational institutions, public-service broadcasters and the like are also prebunking efforts in their own right. For example, the ABC’s “Media Watch” exposes where traditional and social media are at risk of information manipulation or spreading disinformation, highlighting tropes used by media organisations to manipulate readers or viewers. Or the ABC ran a “Behind The News” video series during 2020 about media literacy in the era of fake news and disinformation, with “The Drum” cross-promoting it as something for everyone to see. A similar video-lecture series and resource page has also been made available by the University of Washington on this topic.

What prebunking is all about is disseminating correct and accurate information relevant to an issue where there is a strong likelihood of misinformation or disinformation, in order to make people aware of the proper facts and of what is likely to go around.

Australian Electoral Commission takes to Twitter to rebut election disinformation

Articles

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Don’t Spread Disinformation on Twitter or the AEC Will Roast You (gizmodo.com.au)

Federal Election 2022: How the AEC Is Preparing to Combat Misinformation (gizmodo.com.au)

From the horse’s mouth

Australian Electoral Commission

AEC launches disinformation register ahead of 2022 poll (Press Release)

Previous coverage on HomeNetworking01.info

Being cautious about fake news and misinformation in Australia

My Comments

The next 18 months are set to be very significant as far as general elections in Australia go. This is due to a Federal election and state elections in the most populous states taking place during that time period: this year has the Federal election due by May and the Victorian state election by November, with the New South Wales state election following by March 2023.

Democracy sausages prepared at election day sausage sizzle


Two chances over the next 18 months to benefit from the democracy sausage as you cast your vote
Kerry Raymond, CC BY 4.0 <https://creativecommons.org/licenses/by/4.0>, via Wikimedia Commons

Oh yeah, more chances to eat those democracy sausages available at that school’s sausage sizzle after you cast that vote. But the campaign machine has started up early this year at the Federal level, with United Australia Party ads appearing on commercial TV since the Winter Olympics, yard signs from various political parties appearing in my local neighbourhood, and an independent candidate for the Kooyong electorate running ads online through Google AdSense with some of those ads appearing on HomeNetworking01.info. This is even before the Governor-General has issued the necessary writs to dissolve the Federal Parliament and commence the election cycle.

Ged Kearney ALP candidate yard sign

The campaigns are underway even before the election is called

This season will be coloured with the COVID coronavirus plague and the associated vaccination campaigns, lockdowns and other public-health measures used to mitigate this virus. This will exacerbate Trump-style disinformation campaigns affecting the Australian electoral process, especially from anti-vaccination / anti-public-health-measure groups.

COVID will also exacerbate issues regarding access to the vote in a safe manner. This includes dealing with people who are isolated or quarantined due to them or their household members being struck down by the disease or allowing people on the testing and vaccination front lines to cast their vote. Or it may be about running the polling booths in a manner that is COVID-safe and assures the proper secret ballot.

There is also the recent flooding that has taken place in Queensland and NSW, bringing about questions regarding access to the vote for affected communities and the volunteers helping those communities. All these situations depend on people knowing where and how to cast “convenience votes” like early or postal votes, or knowing where the nearest polling booth is, especially with the flooding rendering the usual booths in affected areas out of action.

The Australian Electoral Commission, which oversees elections at a federal level, has established a register to record fake-news and disinformation campaigns that appear online to target Australians. They will also appear at least on Twitter to debunk disinformation that is swirling around on that platform under the common hashtags associated with Australian politics and elections.

Add to this a stronger, wider “Stop And Consider” campaign to encourage us to be mindful about what we see, hear or read regarding the election. This is based on their original campaign run during the 2019 Federal election to encourage us to be careful about what we share online. Here, that was driven by that Federal election being the first of its kind since we became aware of online fake-news and disinformation campaigns and their power to manipulate the vote.

There will be stronger liaison between the AEC and the online services in relation to sharing intelligence about disinformation campaigns.

But the elephant in the room regarding election safety is IT security and cyber safety for the many IT systems that will see a significant amount of election-related data being created or modified through this season.

Service Victoria contact-tracing QR code sign at Fairfield Primary School

Even the QR-code contact-tracing platforms used by state governments as part of their COVID management efforts have to be considered as far as IT security for an election is concerned – like this one at a school that is likely to be a polling place

This doesn’t just relate to the electoral oversight bodies but any government, media or civil-society setup in place during the election.

That would encompass things ranging from State governments wanting to head towards fully-electronic voter registration and electoral-roll mark-off processes, through the IT that politicians and political parties use for their business processes and the state-government QR-code contact-tracing platforms regularly used by participants during this COVID-driven era, to the IT operated by the media and journalists themselves to report the election. Here, it’s about the safety of the participants in the election process, the integrity of the election process and the ability for voters to make a proper and conscious choice when they cast their vote.

Such systems have a significant number of risks associated with their data, such as cyber attacks intended to interfere with or exfiltrate data or slow down the performance of these systems. It is more so where the perpetrators of this activity extend to adverse nation states or organised crime anywhere in the world. As well, interference with these IT systems is used as a way to create and disseminate fake news, disinformation and propaganda.

But the key issue regarding Australia’s elections being safe from disinformation and election interference is for us to be media-savvy. That includes being aware of material that plays on your emotions; being aware of bias in media and other campaigns; knowing where sources of good-quality and trustworthy news are; and placing importance on honesty, accuracy and ethics in the media.

Here, it may be a good chance to look at the “Behind The News” media-literacy TV series the ABC produced during 2020 regarding the issue of fake news and disinformation. Sometimes you may also find that established media, especially the ABC and SBS or the good-quality newspapers, may be the way to go for reliable election information. Even looking at official media releases “from the horse’s mouth” on government or political-party Websites may work as a means of identifying exaggeration that may be taking place.

Having the various stakeholders encourage media literacy and disinformation awareness, along with government and other entities taking a strong stance on cyber security, can be a way to protect this election season.

YouTube to examine further ways to control misinformation

Article

YouTube recommendation list

YouTube to further crack down on misinformation using warning screens and other strategies

YouTube Eyes New Ways to Stop Misinformation From Spreading Beyond Its Reach – CNET

From the horse’s mouth

YouTube

Inside Responsibility: What’s next on our misinfo efforts (Blog Post)

My Comments

YouTube’s part in controlling the spread of repeated disinformation has been found to be very limited in some ways.

This was focused on managing accounts and channels (collections of YouTube videos submitted by a YouTube account holder and curated by that holder) in a robust manner, like implementing three-strikes policies when repeated disinformation occurs. It extended to managing the content-recommendation engine in order to effectively “bury” that kind of content from end-users’ default views.

But other new issues have come up in relation to this topic. One of these is to continually train the artificial-intelligence / machine-learning subsystems associated with how YouTube operates with new data that represents newer situations. This includes the use of different keywords and different languages.

Another approach that will fly in the face of disinformation purveyors is to point end-users to authoritative resources relating to the topic at hand. This will typically manifest as lists of hyperlinks to text and video resources from respected sources when a video or channel has questionable material.

But a new topic, or a new angle on an existing topic, can yield a data void where there is scant or no information on the topic from respectable resources. This can happen when there is a fast-moving news event fed by the 24-hour news cycle.

Another issue is where someone creates a hyperlink to, or embeds, a YouTube video in their own online presence. This is a common way to put YouTube video content “on the map” and can cause a video to go viral by acquiring many views. In some cases, like “communications-first” messaging platforms such as SMS/MMS or instant messaging, a preview image of the video will appear next to a message that has a link to that video.
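Those preview images typically come from metadata the linked page itself advertises. Here is a rough Python sketch (standard library only) of how a messaging client might fetch that metadata; it assumes the page exposes Open Graph tags, as YouTube watch pages do, and the URL in the usage comment is a placeholder.

```python
# A minimal sketch of building a link preview from a page's Open Graph
# metadata; the example URL is a placeholder, not a real video.
from html.parser import HTMLParser
import urllib.request

class OpenGraphParser(HTMLParser):
    """Collect <meta property="og:..."> tags into a dictionary."""
    def __init__(self):
        super().__init__()
        self.properties = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("property") or "").startswith("og:"):
            self.properties[attrs["property"]] = attrs.get("content")

def link_preview(url: str) -> dict:
    """Return the title and preview-image URL advertised by the page itself."""
    with urllib.request.urlopen(url) as response:
        parser = OpenGraphParser()
        parser.feed(response.read().decode("utf-8", errors="replace"))
    return {"title": parser.properties.get("og:title"),
            "image": parser.properties.get("og:image")}

# Example: link_preview("https://www.youtube.com/watch?v=VIDEO_ID")
```

The point to note is that the preview is generated by the sharing platform, not by YouTube, which is part of why sharing happens beyond YouTube’s own user interface.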

Initially YouTube looked at the idea of preventing a questionable resource from being shared through the platform’s user interface. But questions were raised about this including limiting a viewer’s freedoms regarding taking the content further.

The issue that wasn’t even raised is the fact that the video can be shared without going via YouTube’s user interface. This can be through other means like copying the URL from the address bar if viewing on a regular computer, or invoking the “share” intent on modern desktop and mobile operating systems to take it further. In some operating systems, that can extend to printing out material or “throwing” image or video material to the large-screen TV using a platform like Apple TV or Chromecast. Add to this the fact that a user may want to share the video with others as part of academic research or a news report.

Another approach YouTube is looking at is based on an age-old approach implemented by responsible TV broadcasters, or by YouTube itself with violent, age-restricted or other questionable content. That is to show a warning screen, sometimes accompanied by an audio announcement, before the questionable content plays. Most video-on-demand services implement an interactive approach, at least in their “lean-forward” user interfaces, where the viewer has to assent to the warning before they see any of that content.

In this case, YouTube would run a warning screen regarding the existence of disinformation in the video content before the content plays. Such an approach would make us aware of the situation and act as a “speed bump” against continual consumption of that content or following through on hyperlinks to such content.

Another issue YouTube is working on is keeping its anti-disinformation efforts culturally relevant. This scopes in various nations’ historical and political contexts, whether a news or information source is an authoritative independent source or simply a propaganda machine, fact-checking requirements, and linguistic issues amongst other things. The historical and political issues could include conflicts that have peppered the nation’s or culture’s history or how the nation changed governments.

Having support for relevance to various different cultures provides YouTube’s anti-disinformation effort with some “look-ahead” sense when handling further fake-news campaigns. It also encompasses recognising where a disinformation campaign is being “shaped” to a particular geopolitical area, with that area’s history being woven into the messaging.

But whatever YouTube is doing may have limited effect if the purveyors of this kind of nonsense use other services to host this video content. This can manifest in alternative “free-speech” video hosting services like BitChute, DTube or PeerTube. Or it can be the content creator hosting the video content on their own Website, something that becomes more feasible as the kind of computing power needed for video hosting at scale becomes cheaper.

What is being raised is that YouTube is using its own resources to limit the spread of disinformation hosted on its own servers rather than looking at this issue holistically. But it is looking at issues like the ever-evolving message of disinformation that adapts to particular cultures, along with using warning screens before such videos play.

This is compared to third-party-gatekeeper approaches like NewsGuard (HomeNetworking01.info coverage), where an independent third party scrutinises news content and sites, then puts their results in a database. Here, various forms of logic can work from this database to deny advertising to a site or cause a warning flag to be shown when users interact with that site.
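As a rough illustration of that kind of gatekeeper logic, here is a hypothetical Python sketch; the ratings data, domain names and threshold are invented for illustration and don’t reflect NewsGuard’s actual database or scoring.

```python
# A hypothetical sketch of third-party-gatekeeper logic: look a site up in an
# independent ratings database and decide whether to flag it or deny it ads.
# The ratings data and threshold below are invented for illustration.
from urllib.parse import urlparse

RATINGS = {  # in a real system this would be the third party's database
    "trustworthy-news.example": {"score": 92},
    "dubious-site.example": {"score": 20},
}

def assess(url: str, threshold: int = 60) -> str:
    """Return an action for a URL based on its domain's trust score."""
    domain = urlparse(url).hostname or ""
    rating = RATINGS.get(domain)
    if rating is None:
        return "unrated"               # no data: neither flag nor endorse
    if rating["score"] < threshold:
        return "warn-and-deny-ads"     # below threshold: flag to users, block ads
    return "allow"

# Example: assess("https://dubious-site.example/article") -> "warn-and-deny-ads"
```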

But by realising that YouTube is being used as a host for fake news and disinformation videos, they are taking further action on this issue. This is even though Google will end up playing cat-and-mouse when it comes to disinformation campaigns.

The Spotify disinformation podcast saga could give other music streaming services a chance

Articles

Spotify Windows 10 Store port

Spotify dabbling in podcasts and strengthening its ties with podcasters is placing it at risk of carrying anti-vaxx and similar disinformation

Joni Mitchell joins Neil Young’s Spotify protest over anti-vax content | Joni Mitchell | The Guardian

Nils Lofgren Pulls Music From Spotify – Billboard

My Comments

Spotify has over the last two years jumped on the podcast-hosting wagon even though they were originally providing music on demand.

But just lately they have been hosting the podcast output of Joe Rogan, who is known for disinformation about COVID vaccines. They even strengthened their business relationship with Joe Rogan using the various content-monetisation options they offer and giving his podcast platform-exclusive treatment.

There has been social disdain about Spotify’s business relationship with Joe Rogan due to social-responsibility issues relating to disinformation about essential issues such as vaccination. Neil Young and Joni Mitchell have pulled their music from this online music service and an increasing number of their fans are discontinuing business with Spotify. Now Nils Lofgren, the guitarist from the E Street Band associated with Bruce Springsteen, is intending to pull the music he has “clout” over from Spotify and is encouraging more musicians to do so.

Tim Burrowes, who founded Mumbrella, even wrote in his Unmade blog about the possibility of Spotify being subject to what happened with Sky News and Radio 2GB during the Alan Jones days. That was where one or more collective actions took place to drive advertisers to remove their business from these stations. This could be more so where companies have to be aware of brand safety and social responsibility when they advertise their wares.

In some cases, Apple, Google and Amazon could gain traction with their music-on-demand services. But on the other hand, Deezer, Qobuz and Tidal could gain an increased subscriber base, especially where there is a desire to favour European business or to deal with music-focused media-on-demand services rather than a provider that also runs video or podcast services.

There are questions about whether a music-streaming service like Spotify should be dabbling in podcasts and spoken-word content. That includes any form of “personalised-radio” service where music, advertising and spoken-word content are presented in a manner akin to a local radio station’s output.

Then the other question that will come about is the expectation for online-audio-playback devices like network speakers, hi-fi network streamers and Internet radios. This would extend to other online-media devices like smart TVs or set-top boxes. Here, it is about allowing different audio-streaming services to be associated with these devices and assuring a simplified, consistent user experience out of these services for the duration of the device’s lifespan.

That includes operation-by-reference setups like Spotify Connect where you can manage the music from the online music service via your mobile device, regular computer or similar device. But the music plays through your preferred set of speakers or audio device and isn’t interrupted if you make or take a call, receive a message or play games on your mobile device.

What has come about is that the content hosted on an online-media platform, or the content creators the platform gives special treatment to, may end up affecting that platform’s reputation. This is especially so where the content creator is involved in fake news or disinformation.

Being aware of astroturfing as an insidious form of disinformation

Article

Astroturfing more difficult to track down with social media – academic | RNZ News

My Comments

An issue that is raised in the context of fake news and disinformation is a campaign tactic known as “astroturfing”. This is something that our online life has facilitated thanks to easy-to-produce Websites on affordable Web-hosting deals along with the Social Web.

I am writing about this on HomeNetworking01.info because astroturfing is another form of disinformation that we need to be careful of in this online era.

What is astroturfing?

Astroturfing is organised propaganda activity intended to create a belief of popular grassroots support for a viewpoint in relation to a cause or policy. This activity is organised by one or more large organisations but typically appears as the output of concerned individuals or smaller community organisations such as a peak body for small businesses of a kind.

But there is no transparency about who is actually behind the message or the benign-sounding organisations advancing that message. Nor is there any transparency about the money flow associated with the campaign.

The Merriam-Webster Dictionary, the respected dictionary for the American dialect of the English language, defines it as:

organized activity that is intended to create a false impression of a widespread, spontaneously arising, grassroots movement in support of or in opposition to something (such as a political policy) but that is in reality initiated and controlled by a concealed group or organization (such as a corporation).

The etymology of this word comes about as a play on the “grassroots” expression. It alludes to the AstroTurf synthetic turf initially installed in the Astrodome stadium in Houston in the USA, with the “AstroTurf” trademark becoming a generic trademark for synthetic sportsground turf sold in North America.

This was mainly practised by Big Tobacco to oppose significant taxation and regulation measures against tobacco smoking, but continues to be practised by entities whose interests are against the public good.

How does astroturfing manifest?

It typically manifests as one or more benign-sounding community organisations that appear to demonstrate popular support for or against a particular policy. It often affects policies for the social or environmental good where there is significant corporate or other “big-money” opposition to these policies.

The Internet era has made this more feasible thanks to the ability to create and host Websites cheaply. As online forums and social media came on board, it became feasible to set up multiple personas and organisational identities on forums and social-media platforms to make it appear as though many people or organisations are demonstrating popular support for the argument. It is also feasible to interlink Websites and online forums or Social-Web presences by posting a link from a Website or blog in a forum or Social-Web post, or having articles from a Social-Web account appear on one’s Website.

The multiple online personas created by one entity for the purpose of demonstrating the appearance of popular support are described as “sockpuppet” accounts. This is in reference to children’s puppet shows where two or three puppeteers use glove puppets made out of odd socks, allowing each performer to play several characters at once. This activity can happen synchronously with a particular event that is in play, be it the effective date of an industry reform or set of restrictions; a court case or inquiry taking place; or a legislature working on an important law.

An example of this occurred during the long COVID-19 lockdown that affected Victoria last year, where the “DanLiedPeopleDied” and “DictatorDan” hashtags were manipulated on Twitter to create a sentiment of popular distrust against Dan Andrews. Here it was identified that a significant number of the Twitter accounts that drove these hashtags surfaced or changed their behaviour in sync with the lockdown’s effective period.

But astroturfing can manifest in offline / in-real-life activities like rallies and demonstrations; appearances on talkback radio; letters to newspaper editors; pamphlet drops; and traditional advertising techniques.

Let’s not forget that old-fashioned word-of-mouth advertising for an astroturfing campaign can take place here like over the neighbour’s fence, at the supermarket checkout or around the office’s water cooler.

Sometimes the online activity is used to rally support for one or more offline activities or to increase the amount of word-of-mouth conversation on the topic. Or the pamphlets and outdoor advertising will carry references to the campaign’s online resources so people can find out more “from the horse’s mouth”. This kind of material used for offline promotion can be easily and cheaply produced using “download-to-print” resources, print-and-copy shops that use cost-effective digital press technology, and firms who screen-print T-shirts on demand from digital originals, amongst other online-facilitated technologies.

An example of this highlighted by Spectrum News 1 San Antonio in the USA was the protest activity against COVID-19 stay-at-home orders in that country. This was alluding to Donald Trump and others steering public opinion away from a COVID-safe USA.

This method of deceit capitalises on popular trust in the platform and the apparently-benign group behind the message or appearance of popular support for that group or its message. As well, astroturfing is used to weaken any true grassroots support for or against the opinion.

How does astroturfing affect media coverage of an issue?

The easily-plausible arguments tendered by a benign-sounding organisation can encourage journalists to “go with the flow” regarding the organisation’s ideas. This can include taking the organisation’s arguments at face value as a supporting or opposing view on the topic at hand, especially where journalists want to create a balanced piece of material.

This risk is significantly increased in media environments where there isn’t a culture of critical thinking with obvious examples being partisan or tabloid media. Examples of this could be breakfast/morning TV talk shows on private free-to-air TV networks or talkback radio on private radio stations.

But there is a greater risk of this occurring while there is increasingly-reduced investment in public-service and private news media. Here, the fear of newsrooms being reduced or shut down or journalists not being paid much for their output can reduce the standard of journalism and the ability to perform proper due diligence on news sources.

There is also the risk of an astroturfing campaign affecting academic reportage of the issue. This is more so where the student doesn’t have good critical-thinking and research skills and can be easily swayed by spin. It applies especially to secondary education or some tertiary-education situations like vocational courses or people at an early stage in their undergraduate studies.

How does astroturfing affect healthy democracies?

All pillars of government can and do fall victim to astroturfing. This can happen at all levels of government ranging from local councils through state or regional governments to the national governments.

During an election, an astroturfing campaign can be used to steer opinion for or against a political party or candidate who is standing for election. In the case of a referendum, it can steer popular opinion towards or against the questions that are the subject of the referendum. This is done in a manner to convey the veneer of popular grassroots support for or against the candidate, party or issue.

The legislature is often a hotbed of political lobbying by interest groups, and astroturfing can be used to create a veneer of popular support for or against legislation or regulation of concern to the interest group. As well, astroturfing can be used as a tool to place pressure on legislature members to advance or stall a proposed law and, in some cases, force a government out of power where there is a stalemate over that law.

The public-service agencies of the executive government that have the power to permit or veto activity are also victims of astroturfing. This comes in the form of whether a project can go ahead or not, or whether a product is licensed for sale within the jurisdiction. It can also affect popular trust in any measures that officials in the executive government carry out.

As well, the judiciary can be tasked with handling legal actions launched by pressure groups who use astroturfing to create a sense of popular support for revising legislation or regulation. It also extends to how jurors are influenced in any jury trial, or which judges are appointed to a court of law, especially a powerful appellate court or the jurisdiction’s court of last resort.

Politicians, significant officials and key members of the judiciary can fall victim to character-assassination campaigns that are part of one or more astroturfing campaigns. This can erode continued popular trust in these individuals and can even affect their ability to live or conduct their public business in safety.

Here, politicians and other significant government officials are becoming increasingly accessible to the populace. This is facilitated by the officials themselves maintaining a Social-Web presence under a public-facing persona on the popular social-media platforms, often with the same account name or “handle” used across those platforms. In the same context, the various offices and departments maintain their social-Web presence on the popular platforms using office-wide accounts. This is in addition to other online presences like ministerial Web pages or the public-facing email addresses they or the government maintain.

These officials can be approached by interest groups who post to the official’s Social-Web presence. Or whoever creates a Social Web post about the official or the issue at hand can reference the various officials and government entities through hashtags or mentions of the platform-native account names those entities operate. In a lot of cases, sympathetic journalists and media organisations are referenced as well in order to create media interest.

As well, one post with the right message and the right mix of hashtags and referenced account names can be viewed by the targeted decision makers and the populace at the same time. Then people who are sympathetic to that post’s message end up reposting that message, giving it more “heat”.

Here, the Social Web is seen as providing unregulated access to these powerful decision-makers, even though the decision-makers work with personal assistants or similar staff to vet the content they see. As well, there isn’t any transparency about who is posting the content that references these officials, i.e. you don’t know whether it is a local constituent or someone pressured by an interest group.

What can be done about it

The huge question here is what can be done about astroturfing as a means of disinformation.

A significant number of jurisdictions implement attribution requirements for any advertising or similar material as part of their fair-trading, election-oversight, broadcasting, unsolicited-advertising or similar laws. Similarly, a significant number of jurisdictions implement lobbyist regulation in relation to who has access to the jurisdiction’s politicians. As outlined in the RNZ article that I referred to, New Zealand is examining astroturfing in the context of whether it should regulate access to its politicians.

But most of these laws regulate what goes on within the offline space of the jurisdiction they pertain to. This makes it feasible for foreign actors to engage in astroturfing and similar campaigns from other territories across the globe using online means, without any action being taken.

Regulating lobbyist access to the jurisdiction’s politicians or significant officials raises its own questions. Here, it could be about whether the jurisdiction’s citizens have a continual right of access to their elected government or not. As well, there is the issue of assuring governmental transparency and a healthy dialogue with the citizens.

The 2016 fake-news crisis which highlighted the distortion of the 2016 US Presidential Election and UK Brexit referendum became a wake-up call regarding how the online space can be managed to work against disinformation.

Here, Silicon Valley took on the task of managing online search engines, social-media platforms and online advertising networks to regulate foreign influence and assure accountability when it comes to political messaging in the online space. This included identity verification of advertiser accounts, keeping detailed historical records of ads from political advertisers on ad networks and social media, and clamping down on coordinated inauthentic behaviour on social-media platforms.

In addition to this, an increasingly-large army of “fact-checkers” organised by credible newsrooms, universities and similar organisations appeared. These groups researched and verified claims being published through the media or on online platforms and would state whether they were true or false based on their research.

What we can do is research further and trust our instincts when it comes to questionable claims that come from apparently-benign organisations. Here we can do our due diligence and check things like how long an online account has been in operation, especially whether its creation coincides with particular political, regulatory or similar events occurring or appearing on the horizon.
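As a rough illustration of that account-age check, here is a minimal Python sketch, assuming you have already looked up the account’s creation date and the dates of the relevant events; the 30-day window is an arbitrary assumption, not an established rule.

```python
from datetime import date, timedelta

def created_near_event(created, events, window_days=30):
    """Return True if an account's creation date falls within window_days of any key event."""
    window = timedelta(days=window_days)
    return any(abs(created - event) <= window for event in events)

# Hypothetical example: an account created days before a lockdown announcement
key_events = [date(2020, 8, 2)]
print(created_near_event(date(2020, 7, 28), key_events))  # True: worth a closer look
```

A match on its own proves nothing, of course; it is simply one signal to weigh alongside the behaviours listed below.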

Here you have to look out for behaviours in the online or offline content like:

  • Inflammatory or manipulative language that plays on your emotions
  • Claims to debunk topic-related myths that aren’t really myths
  • Questioning or pillorying those exposing the wrongdoings core to the argument, rather than the actual wrongdoings
  • A chorus of the same material from many accounts (one mechanical way to spot this is sketched below)
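That last point can even be checked mechanically if you can export the posts in question. The following Python sketch is one naive approach, assuming the posts are available as (account, text) pairs; real analyses use fuzzier text matching, but exact matches after simple normalisation already catch a lot of copy-paste campaigns.

```python
import re
from collections import defaultdict

def fingerprint(text):
    """Crude normalisation: lower-case, strip punctuation, collapse whitespace."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def find_choruses(posts, min_accounts=5):
    """Flag messages that many distinct accounts push in near-identical form.

    posts: iterable of (account_name, post_text) pairs.
    Returns a dict mapping each suspicious message to the accounts posting it.
    """
    accounts_by_message = defaultdict(set)
    for account, text in posts:
        accounts_by_message[fingerprint(text)].add(account)
    return {message: accounts
            for message, accounts in accounts_by_message.items()
            if len(accounts) >= min_accounts}
```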

Conclusion

We need to be aware of astroturfing as another form of disinformation that is prevalent in the online age. Here, it can take in naive people who accept information at face value without doing further research on what is being pushed.

The US now takes serious action on electoral disinformation

Article

Now Uncle Sam is taking action on voter suppression

US arrests far-right Twitter troll for 2016 election interference | Engadget

From the horse’s mouth

United States Department Of Justice

Social Media Influencer Charged with Election Interference Stemming from Voter Disinformation Campaign (Press Release)

My Comments

Previously, when I have talked about activities that social media companies have undertaken regarding misinformation during election cycles, including misinformation to suppress voter participation, I have covered what these companies in the private sector are doing.

But I have also wanted to see a healthy dialogue between the social-media private sector and public-sector agencies responsible for the security and integrity of the elections. This is whether they are an election-oversight authority like the FEC in the USA or the AEC in Australia; a broadcast oversight authority like the FCC in the USA or OFCOM in the UK; or a consumer-rights authority like the FTC in the USA or the ACCC in Australia. Here, these authorities need to be able to know where the proper communication of electoral information is at risk so they can take appropriate education and enforcement action regarding anything that distorts the election’s outcome.

Just lately, the US government arrested a Twitter troll who had been running information on his Twitter feed to dissuade Americans from participating properly and making their vote count in the 2016 Presidential Election. Here, the troll was suggesting that voters skip their local polling booths and cast their vote by SMS or social media instead, which isn’t a valid means of casting a vote in the USA. Twitter had banned him and a number of alt-right figureheads that year for harassment.

These charges are based on a little-known US statute that proscribes activity denying US citizens the exercise of their rights under that country’s Constitution, or dissuading them from exercising those rights. That includes the right to cast a legitimate vote at an election.

But this criminal case could be seen as a means to create a “conduit” between social media platforms and the public sector to use the full extent of the law to clamp down on disinformation and voter suppression using the Web. I also see it as a chance for public prosecutors to examine the laws of the land and use them as a tool to work against the fake news and disinformation scourge.

This is a criminal matter before the courts of law in the USA and the defendant is presumed innocent unless and until he is found guilty in a court of law.

Gizmodo examines the weaponisation of a Twitter hashtag

Article

How The #DanLiedPeopleDied Hashtag Reveals Australia’s ‘Information Disorder’ Problem | Gizmodo

My Comments

I read in Gizmodo how an incendiary hashtag directed against Daniel Andrews, the State Premier of Victoria in Australia, was pushed around the Twittersphere, and I am raising it here as part of keeping HomeNetworking01.info readers aware of disinformation tactics as we increasingly rely on the Social Web for our news.

What is a hashtag

A hashtag is a single keyword preceded by a hash ( # ) symbol that is used to identify posts within the Social Web that feature a concept. It was initially introduced on Twitter as a way of indexing posts created on that platform and making them easy to search by concept. But an increasing number of other social-Web platforms have enabled the use of hashtags for the same purpose. They are typically used to embody a slogan or idea in an easy-to-remember way across the social Web.

Most social-media platforms turn these hashtags into a hyperlink that shows a filtered view of all posts featuring that hashtag. They also use statistical calculations to identify the most popular hashtags on the platform, or the ones whose visibility is increasing, and present this in meaningful ways like ranked lists or keyword clouds.
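To make that indexing idea concrete, here is a minimal Python sketch of hashtag extraction and frequency ranking. It is only an assumption of the general approach; real platforms weight recency, account diversity and other signals rather than raw counts.

```python
import re
from collections import Counter

HASHTAG_PATTERN = re.compile(r"#(\w+)")

def extract_hashtags(post_text):
    """Return the hashtags found in a post, lower-cased for consistent indexing."""
    return [tag.lower() for tag in HASHTAG_PATTERN.findall(post_text)]

def rank_hashtags(posts, top_n=10):
    """Count hashtag occurrences across a batch of posts and return a ranked list."""
    counts = Counter()
    for post in posts:
        counts.update(extract_hashtags(post))
    return counts.most_common(top_n)

# Example with made-up posts
posts = [
    "Stay safe everyone #COVID19 #lockdown",
    "More news on the quarantine program #COVID19",
    "Saw this today #COVID19 #lockdown #memes",
]
print(rank_hashtags(posts))  # [('covid19', 3), ('lockdown', 2), ('memes', 1)]
```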

How this came about

Earlier on in the COVID-19 coronavirus pandemic, an earlier hashtag called #ChinaLiedPeopleDied was doing the rounds of the Social Web. It underscored a claim, carrying only a small modicum of truth, that the Chinese government didn’t come clean about the genesis of the COVID-19 plague with its worldwide death toll, and about its role in informing the world about it.

That hashtag was used to fuel Sinophobic hatred against the Chinese community and was one of the first symptoms of questionable information floating around the Social Web regarding COVID-19 issues.

As Australia passed through the early months of the COVID-19 plague, one of its border-control measures was to require incoming travellers to stay in particular hotels for a fortnight as a quarantine measure before they could roam around Australia. The Australian federal government put this program in the hands of the state governments but offered resources like the use of the military as part of its implementation.

The second wave of the COVID-19 virus was happening within Victoria, and a significant number of the cases were traced to some of the hotels associated with the hotel quarantine program. This caused a very significant death toll and drove the state government to a raft of very stringent lockdown measures.

A new hashtag called #DanLiedPeopleDied came about because the Premier, Daniel Andrews, as head of the state’s executive government, wasn’t perceived to have come clean about any bungles associated with the management of the hotel quarantine program.

On 14 July 2020, this hashtag first appeared on a Twitter account that initially touched on Egyptian politics and delivered its posts in the Arabic language. But it suddenly switched countries, languages and political topics, which is one of the symptoms of a Social Web account existing just to peddle disinformation and propaganda.

The hashtag lay low until 12 August, when a run of Twitter posts featuring it were delivered by hyper-partisan Twitter accounts. This effort, also underscored by newly-created or suspicious accounts that existed to bolster the messaging, was to make it register on Twitter’s systems as a “trending” hashtag.

Subsequently, a far-right social-media influencer with a following of 116,000 Twitter accounts ran a post to keep the hashtag going. There was a lot of very low-quality traffic featuring the hashtag or its messaging, including a lot of low-effort memes published to drive it.

The above-mentioned Gizmodo article has graphs showing how use of the hashtag rose and fell over time, which are worth having a look at.

What were the main drivers

But a lot of the traffic highlighted in the article was driven by the use of new or inauthentic accounts which aren’t necessarily “bots”, i.e. machine-operated accounts that provide programmatic responses or posts. Rather, this is the handiwork of trolls or sockpuppets (multiple online personas that are perceived to be different people but say the same thing).

As well, there was a significant amount of “gaming the algorithm” activity going on in order to raise the profile of that hashtag. This is because most social-media services implement algorithms to surface trending activity and populate the user’s main view.
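To show what is being gamed here: trending systems typically compare recent use of a hashtag against its historical baseline, and that burst is exactly what a coordinated campaign manufactures. The Python sketch below is a hypothetical analyst’s heuristic, not anything Twitter publishes; the threshold values are illustrative assumptions.

```python
def burst_score(recent_count, baseline_avg, epsilon=1.0):
    """Ratio of recent activity to the historical baseline; high values suggest a burst.

    epsilon keeps the score finite for hashtags with no history at all.
    """
    return recent_count / (baseline_avg + epsilon)

def looks_coordinated(recent_posters, baseline_avg, burst_threshold=10.0):
    """Crude heuristic: a strong burst carried by few distinct accounts is suspicious.

    recent_posters: list of account names that used the hashtag in the recent window.
    """
    score = burst_score(len(recent_posters), baseline_avg)
    diversity = len(set(recent_posters)) / max(len(recent_posters), 1)
    # Organic trends usually show broad account diversity; "stuck record"
    # campaigns show the same few accounts posting over and over.
    return score >= burst_threshold and diversity < 0.5
```

A real system would also weight account age and follower overlap, but even this toy version captures why a sudden run of posts from a handful of new accounts stands out.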

Why this is happening

Like other fake-news, disinformation and propaganda campaigns, the #DanLiedPeopleDied hashtag is an effort to sow seeds of fear, uncertainty and doubt while bringing about discord with information that has very little in the way of truth. As well, the main goal is to cause popular distrust in leadership figures and entities, along with their advice and efforts.

In this case, the campaign was targeted at us Victorians, who were facing social and economic instability associated with the recent stay-at-home orders brought on by COVID-19’s intense reappearance, in order to have us distrust Premier Dan Andrews and the State Government even more. As such, these kinds of campaigns are run at people who are in a state of vulnerability, when they are less likely to use defences like critical thought to protect themselves against questionable information.

Australia is rated as one of the most sustainable countries in the world by the Fragile States Index, in the same league as the Nordic countries, Switzerland, Canada and New Zealand. This means the country is known to be socially, politically and economically stable. But a targeted information-weaponisation campaign can be used to destabilise even a country like that, and we need to be sensitive to such tactics.

One of the key factors behind the problem of information weaponisation is the weakening of traditional media’s role in the dissemination of hard news. This includes younger people preferring to go to online resources, especially the Social Web, portals or news-aggregator Websites, for their daily news intake. It also includes many established newsrooms receiving less funding due to falling advertising, subscription or government income, which diminishes their ability to pay staff to turn out good-quality news.

When we make use of social media, we need to develop a healthy suspicion of what is appearing. Beware of accounts that suddenly appear or develop chameleon behaviours, especially when key political events occur around the world. Also be careful of accounts that “spam” their output with a controversial hashtag or adopt a “stuck record” mentality over a topic.

Conclusion

Any time where a jurisdiction is in a state of turmoil is where the Web, especially the Social Web, can be a tool of information warfare. When you use it, you need to be on your guard about what you share or which posts you interact with.

Here, do research on hashtags that suddenly trend on a social-media platform and play on your emotions, and be especially careful of new or inauthentic accounts that push these hashtags.