Telstra has joined a large number of private-sector actors, including the airline Qantas, in running campaigns encouraging us to get vaccinated against the COVID-19 coronavirus.
Here, Telstra is exploiting its position as a mobile telephony carrier to tackle the 5G mobile-broadband myths that are often run in the same breath as anti-vax myths. This is because anti-vaccination conspiracy theorists often run other themes alongside, like the supposed harm caused by 5G mobile broadband and other non-ionising radiation sources.
Mark Humphries, who voices this campaign, uses a comic line to underscore the way these conspiracy theorists approach you to spill all their nonsense. He uses the same humour to play on these remarks so as to help you sort the truth from the nonsense they tell you.
Even the line “do your own research” that they quote is turned around to mean doing research from proper, knowledgeable sources who are qualified to talk about these things.
But I like the fact that he comes at the vaccination issue using the same kind of approach as those who peddle this disinformation, who can often include people within our social circle.
Here, the proper information is that these COVID vaccines, like both doses of the AstraZeneca vaccine which I had, are delivered purely as a liquid injected using garden-variety medical syringes and needles. They have been tested for safety and efficacy before being approved and have nothing to do with 5G (or other non-ionising) radiation, microchips or magnetism.
Don’t fall for the nonsense! Get those proper vaccination jabs and stay safe!
An issue that is raised in the context of fake news and disinformation is a campaign tactic known as “astroturfing”. This is something that our online life has facilitated thanks to easy-to-produce Websites on affordable Web-hosting deals along with the Social Web.
I am writing about this on HomeNetworking01.info because astroturfing is another form of disinformation that we need to be careful of in this online era.
What is astroturfing?
Astroturfing is organised propaganda activity intended to create a belief of popular grassroots support for a viewpoint in relation to a cause or policy. The activity is organised by one or more large organisations but typically appears as the output of concerned individuals or smaller community organisations, such as a peak body for small businesses of a particular kind.
But there is no transparency about who is actually behind the message or the benign-sounding organisations advancing that message. Nor is there any transparency about the money flow associated with the campaign.
One dictionary definition describes it as “organized activity that is intended to create a false impression of a widespread, spontaneously arising, grassroots movement in support of or in opposition to something (such as a political policy) but that is in reality initiated and controlled by a concealed group or organization (such as a corporation)”.
The word came about as a play on the “grassroots” expression. It alludes to the Astroturf synthetic turf first installed in the Astrodome stadium in Houston in the USA, with the “Astroturf” trademark becoming a generic trademark for synthetic sportsground turf sold in North America.
This tactic was practised mainly by Big Tobacco to oppose significant taxation and regulation measures against tobacco smoking, and it continues to be practised by entities whose interests run against the public good.
How does astroturfing manifest?
It typically manifests as one or more benign-sounding community organisations that appear to demonstrate popular support for or against a particular policy. It typically affects policies for the social or environmental good where there is significant corporate or other “big-money” opposition to these policies.
The Internet era has made this more feasible thanks to the ability to create and host Websites for cheap. As online forums and social media came on board, it became feasible to set up multiple personas and organisational identities on forums and social-media platforms to make it appear as though many people or organisations are demonstrating popular support for the argument. It is also feasible to interlink Websites and online forums or Social-Web presences by posting a link from a Website or blog in a forum or Social-Web post or having articles on a Social Web account appear on one’s Website.
The multiple online personas created by one entity to demonstrate the appearance of popular support are described as “sockpuppet” accounts. This is a reference to children’s puppet shows where two or three puppeteers use glove puppets made out of odd socks, letting each performer manipulate twice as many characters. Astroturfing activity can also happen in synchrony with a particular event that is in play, be it the effective date of an industry reform or set of restrictions; a court case or inquiry taking place; or a legislature working on an important law.
An example of this occurred during the long COVID-19 lockdown that affected Victoria last year, where the “DanLiedPeopleDied” and “DictatorDan” hashtags were manipulated on Twitter to create a sentiment of popular distrust against Dan Andrews. Here it was identified that a significant number of the Twitter accounts driving these hashtags surfaced or changed their behaviour in synchrony with the lockdown’s effective period.
But astroturfing can also manifest in offline, in-real-life activities like rallies and demonstrations; appearances on talkback radio; letters to newspaper editors; pamphlet drops; and traditional advertising techniques.
Let’s not forget that old-fashioned word-of-mouth promotion of an astroturfing campaign can take place over the neighbour’s fence, at the supermarket checkout or around the office water cooler.
Sometimes the online activity is used to rally for support for one or more offline activities or to increase the amount of word-of-mouth conversation on the topic. Or the pamphlets and outdoor advertising will carry references to the campaign’s online resources so people can find out more “from the horse’s mouth”. This kind of material used for offline promotion can be easily and cheaply produced using “download-to-print” resources, print and copy shops that use cost-effective digital press technology, firms who screen-print T-shirts on demand from digital originals amongst other online-facilitated technologies.
An example of this, highlighted by Spectrum News 1 San Antonio in the USA, was the protest activity against COVID-19 stay-at-home orders in that country. The coverage alluded to Donald Trump and others steering public opinion away from a COVID-safe USA.
This method of deceit capitalises on popular trust in the platform and the apparently-benign group behind the message or appearance of popular support for that group or its message. As well, astroturfing is used to weaken any true grassroots support for or against the opinion.
How does astroturfing affect media coverage and academic discussion of an issue?
The plausible-sounding arguments tendered by a benign-sounding organisation can encourage journalists to “go with the flow” regarding the organisation’s ideas. This can include taking the organisation’s arguments at face value as a supporting or opposing view on the topic at hand, especially where journalists want to create a balanced piece.
This risk is significantly increased in media environments where there isn’t a culture of critical thinking with obvious examples being partisan or tabloid media. Examples of this could be breakfast/morning TV talk shows on private free-to-air TV networks or talkback radio on private radio stations.
But there is a greater risk of this occurring while investment in public-service and private news media keeps shrinking. Here, the fear of newsrooms being reduced or shut down, or journalists not being paid much for their output, can reduce the standard of journalism and the ability to perform proper due diligence on news sources.
There is also the risk of an astroturfing campaign affecting academic reportage of the issue. This is more so where the student doesn’t have good critical-thinking and research skills and can be easily swayed by spin. It is especially so in secondary education or some tertiary-education situations like vocational courses, or for people at an early stage of their undergraduate studies.
How does astroturfing affect healthy democracies?
All pillars of government can and do fall victim to astroturfing. This can happen at all levels of government ranging from local councils through state or regional governments to the national governments.
During an election, an astroturfing campaign can be used to steer opinion for or against a political party or candidate who is standing for election. In the case of a referendum, it can steer popular opinion towards or against the questions that are the subject of the referendum. This is done in a manner to convey the veneer of popular grassroots support for or against the candidate, party or issue.
The legislature is often a hotbed of political lobbying by interest groups, and astroturfing can be used to create a veneer of popular support for or against legislation or regulation of concern to the interest group. As well, astroturfing can be used as a tool to place pressure on legislature members to advance or stall a proposed law and, in some cases, force a government out of power where there is a stalemate over that law.
The public-service agencies of the executive government who have the power to permit or veto activity are also victims of astroturfing. This comes in the form of whether a project can go ahead or not; or whether a product is licensed for sale within the jurisdiction. It can also affect the popular trust in any measures that officials in the executive government execute.
As well, the judiciary can be tasked with handling legal actions launched by pressure groups who use astroturfing to create a sense of popular support to revise legislation or regulation. It also includes how jurors are influenced in any jury trial or which judges are empanelled in a court of law, especially a powerful appellate court or the jurisdiction’s court of last resort.
Politicians, significant officials and key members of the judiciary can fall victim to character-assassination campaigns run as part of one or more astroturfing campaigns. This can affect continued popular trust in these individuals and can even affect their ability to live or conduct their public business in safety.
Here, politicians and other significant government officials are increasingly accessible to the populace. This is facilitated by their maintaining a Social-Web presence using a public-facing persona on the popular social-media platforms, with the same account name or “handle” used across the multiple platforms. In the same context, the various offices and departments maintain their Social-Web presence on the popular platforms using office-wide accounts. This is in addition to other online presences like the ministerial Web pages or public-facing email addresses they or the government maintain.
These officials can be approached by interest groups who post to the official’s Social-Web presence. Or a reference can be created to the various officials and government entities through the use of hashtags or mentions of platform-native account names operated by these entities when someone creates a Social Web post about the official or issue at hand. In a lot of cases, there is reference to sympathetic journalists and media organisations in order to create media interest.
As well, one post with the right message and the right mix of hashtags and referenced account names can be viewed by the targeted decision makers and the populace at the same time. Then people who are sympathetic to that post’s message end up reposting that message, giving it more “heat”.
Here, the Social Web is seen as providing unregulated access to these powerful decision-makers, despite the decision-makers working with personal assistants or similar staff to vet the content they see. As well, there isn’t any transparency about who is posting the content that references these officials: you don’t know whether it is a local constituent or someone pressured by an interest group.
What can be done about it?
The huge question here is what can be done about astroturfing as a means of disinformation.
A significant number of jurisdictions implement attribution requirements for any advertising or similar material as part of their fair-trading, election-oversight, broadcasting, unsolicited-advertising or similar laws. Similarly, a significant number of jurisdictions implement lobbyist regulation in relation to who has access to the jurisdiction’s politicians. As outlined in the RNZ article that I referred to, New Zealand is examining astroturfing in the context of whether it should regulate access to its politicians.
But most of these laws regulate what goes on within the offline space within the jurisdiction that they pertain to. It could become feasible for foreign actors to engage in astroturfing and similar campaigns from other territories across the globe using online means without any action being taken.
The issue of regulating lobbyist access to the jurisdiction’s politicians or significant officials can raise questions. Here it could be about whether the jurisdiction’s citizens have a continual right of access to their elected government or not. As well, there is the issue of assuring governmental transparency and a healthy dialogue with the citizens.
The well-oiled government and business machines have employees like secretaries and assistants who have the function of being gatekeepers for the officials that matter. Even significant and influential public figures engage these “gatekeeper” teams to protect their personality “brand” from being misused.
These “gatekeeper” teams also include staff dedicated to monitoring traditional and social-media coverage of the issues at hand. These employees, if they are astute enough, can alert the officials to disinformation campaigns or prevent the officials from being swayed by them.
The 2016 fake-news crisis, which highlighted the distortion of the 2016 US Presidential Election and the UK Brexit referendum, became a wake-up call regarding how the online space can be managed to work against disinformation.
Here, Silicon Valley took on the task of managing online search engines, social-media platforms and online advertising networks to regulate foreign influence and assure accountability when it comes to political messaging in the online space. This included identity verification of advertiser accounts or keeping detailed historical records of ads from political advertisers on ad networks or social media or clamping down on coordinated inauthentic behaviour on social media platforms.
In addition to this, an increasingly-large army of “fact-checkers” organised by credible newsrooms, universities and similar organisations appeared. These groups researched and verified claims being published through the media or on online platforms and would state whether each claim was true or false based on their research.
What we can do is research further and trust our instincts when it comes to questionable claims that come from apparently-benign organisations. Here we can do our due diligence and check for things like how long an online account has been in operation for, especially if it is synchronous to particular political, regulatory or similar events occurring or being on the horizon.
Here you have to look out for behaviours in the online or offline content like:
Inflammatory or manipulative language that plays on your emotions
Claims to debunk topic-related myths that aren’t really myths
Questioning or pillorying those exposing the wrongdoings core to the argument, rather than the actual wrongdoings
A chorus of the same material from many accounts
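As a rough illustration of the kind of due diligence described above, a short script can flag accounts whose creation date sits suspiciously close to a key event, or whose posting rate looks spam-like. This is only a sketch: the account records, field names and thresholds below are all hypothetical, not drawn from any real platform’s API.

```python
from datetime import date

# Hypothetical account records; real platform APIs expose different fields.
accounts = [
    {"handle": "concerned_voter_2020", "created": date(2020, 8, 10), "posts_per_day": 150},
    {"handle": "long_time_local", "created": date(2012, 3, 4), "posts_per_day": 2},
]

# e.g. the effective date of a contentious policy or set of restrictions
EVENT = date(2020, 8, 12)

def looks_suspicious(acct, event=EVENT, window_days=30, spam_rate=50):
    """Flag accounts created shortly before a key event, or posting at spam-like rates."""
    age_at_event = (event - acct["created"]).days
    return (0 <= age_at_event <= window_days) or acct["posts_per_day"] >= spam_rate

flagged = [a["handle"] for a in accounts if looks_suspicious(a)]
print(flagged)  # ['concerned_voter_2020']
```

A real investigation would of course look at far more signals, such as sudden changes of language or topic, but the account-age-versus-event check is the one highlighted in the Victorian lockdown example.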
We need to be aware of astroturfing as another form of disinformation that is prevalent in the online age. Here it can take in people who are naive and accept information at face value without doing further research on what is being pushed.
Previously, when I have talked about activities that social media companies have undertaken regarding misinformation during election cycles, including misinformation to suppress voter participation, I have covered what these companies in the private sector are doing.
But I have also wanted to see a healthy dialogue between the social-media private sector and public-sector agencies responsible for the security and integrity of the elections. This is whether they are an election-oversight authority like the FEC in the USA or the AEC in Australia; a broadcast oversight authority like the FCC in the USA or OFCOM in the UK; or a consumer-rights authority like the FTC in the USA or the ACCC in Australia. Here, these authorities need to be able to know where the proper communication of electoral information is at risk so they can take appropriate education and enforcement action regarding anything that distorts the election’s outcome.
Just lately, the US government arrested a Twitter troll who had been running information on his Twitter feed to dissuade Americans from participating properly and making their vote count in the 2016 Presidential Election. Here, the troll was suggesting that they not attend local polling booths but cast their vote using SMS or social media, which isn’t a proper means of casting a vote in the USA. Twitter had banned him and a number of alt-right figureheads that year for harassment.
These charges are based on a little-known US statute that proscribes activity that denies or dissuades a US citizen’s right to exercise their rights under that country’s Constitution. That includes the right to cast a legitimate vote at an election.
But this criminal case could be seen as a means to create a “conduit” between social media platforms and the public sector to use the full extent of the law to clamp down on disinformation and voter suppression using the Web. I also see it as a chance for public prosecutors to examine the laws of the land and use them as a tool to work against the fake news and disinformation scourge.
This is a criminal matter before the courts in the USA and the defendant is presumed innocent unless found guilty in a court of law.
I read in Gizmodo how an incendiary hashtag directed against Daniel Andrews, the State Premier of Victoria in Australia, was pushed around the Twittersphere, and I am raising this in an article. It is part of keeping HomeNetworking01.info readers aware of disinformation tactics as we increasingly rely on the Social Web for our news.
What is a hashtag?
A hashtag is a single keyword preceded by a hash ( # ) symbol that is used to identify posts within the Social Web that feature a concept. It was initially introduced on Twitter as a way of indexing posts created on that platform and making them easy to search by concept. But an increasing number of other Social-Web platforms have enabled the use of hashtags for the same purpose. They are typically used to embody a slogan or idea in an easy-to-remember way across the Social Web.
Most social-media platforms turn these hashtags into hyperlinks that show a filtered view of all posts featuring that hashtag. They even use statistical calculations to identify the most popular hashtags on the platform, or those whose visibility is increasing, and present this in meaningful ways like ranked lists or keyword clouds.
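To sketch how this indexing can work, the minimal example below extracts hashtags from post text with a regular expression and tallies them, roughly the way a platform might before ranking “trending” tags. The posts are made up, and real platforms use far more elaborate statistics than a raw count:

```python
import re
from collections import Counter

# A hash symbol followed by a single keyword
HASHTAG = re.compile(r"#(\w+)")

posts = [
    "Stay safe everyone #COVID19 #StayHome",
    "Doing my own research from proper sources #COVID19",
    "Lockdown day 40... #StayHome #StayHome",
]

def extract_hashtags(text: str) -> list[str]:
    """Return all hashtags in a post, lower-cased so #StayHome and #stayhome match."""
    return [tag.lower() for tag in HASHTAG.findall(text)]

# Tally every occurrence across all posts as a crude popularity measure
counts = Counter(tag for post in posts for tag in extract_hashtags(post))
print(counts.most_common(1))  # [('stayhome', 3)]
```

The same tally, tracked over time windows, is the raw material for the “suddenly trending” detection discussed later in this article.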
How this came about
Earlier in the COVID-19 coronavirus pandemic, a hashtag called #ChinaLiedPeopleDied was working the Social Web. It underscored a concept with only a modicum of truth: that the Chinese government didn’t come clean about the genesis of the COVID-19 plague, with its worldwide death toll, and their role in informing the world about it.
That hashtag was used to fuel Sinophobic hatred against the Chinese community and was one of the first symptoms of questionable information floating around the Social Web regarding COVID-19 issues.
As Australia passed through the early months of the COVID-19 plague, one of its border-control measures for this disease required incoming travellers to stay in particular hotels for a fortnight, as a quarantine measure, before they could roam around Australia. The Australian federal government put this program in the hands of the state governments but offered resources, like the use of the military, to these governments as part of its implementation.
When the second wave of the COVID-19 virus happened within Victoria, a significant number of the cases were linked to some of the hotels associated with the hotel quarantine program. This caused a very significant death toll and led the state government to impose a raft of very stringent lockdown measures.
A new hashtag called #DanLiedPeopleDied came about because the Premier, Daniel Andrews, as head of the state’s executive government, was perceived not to have come clean about any bungles associated with the management of the hotel quarantine program.
On 14 July 2020, this hashtag first appeared in a Twitter account that initially touched on Egyptian politics and delivered its posts in the Arabic language. But it suddenly switched countries, languages and political topics, which is one of the symptoms of a Social Web account existing just to peddle disinformation and propaganda.
The hashtag lay low until 12 August, when a run of Twitter posts featuring it was delivered by hyper-partisan Twitter accounts. This effort, also underscored by newly-created or suspicious accounts that existed to bolster the messaging, was intended to make it register on Twitter’s systems as a “trending” hashtag.
Subsequently a far-right social-media influencer with a following of 116,000 Twitter accounts ran a post to keep the hashtag going. There was a lot of very low-quality traffic featuring that hashtag or its messaging. It also included a lot of low-effort memes being published to drive the hashtag.
The above-mentioned Gizmodo article has graphs to show how the hashtag appeared over time which is worth having a look at.
What were the main drivers?
But a lot of the traffic highlighted in the article was driven by the use of new or inauthentic accounts which aren’t necessarily “bots” – machine-operated accounts that provide programmatic responses or posts. Rather, it is the handiwork of trolls or sockpuppets (multiple online personas that are perceived to be different but say the same thing).
As well, there was a significant amount of “gaming the algorithm” activity going on in order to raise the profile of that hashtag. This is due to most social-media services implementing algorithms to expose trending activity and populate the user’s main view.
Why this is happening
Like other fake-news, disinformation and propaganda campaigns, the #DanLiedPeopleDied hashtag is an effort to sow seeds of fear, uncertainty and doubt while bringing about discord with information that has very little in the way of truth. As well, the main goal is to cause popular distrust in leadership figures and entities, as well as in their advice and efforts.
In this case, the campaign was targeted at us Victorians, who were facing social and economic instability associated with the recent stay-at-home orders thanks to COVID-19’s intense reappearance, in order to have us distrust Premier Dan Andrews and the State Government even more. These kinds of campaigns are run against people in a state of vulnerability, when they are less likely to use defences like critical thought to protect themselves against questionable information.
As far as I know, Australia is rated as one of the most sustainable countries in the world by the Fragile States Index, in the same league as the Nordic countries, Switzerland, Canada and New Zealand. This means the country is known to be socially, politically and economically stable. But a targeted information-weaponisation campaign can be used to destabilise even such a country, and we need to be sensitive to such tactics.
One of the key factors behind the problem of information weaponisation is the weakening of traditional media’s role in the dissemination of hard news. This includes younger people preferring to go to online resources, especially the Social Web, portals or news aggregator Websites for their daily news intake. It also includes many established newsrooms receiving reduced funding thanks to reduced advertising, subscription or government income, reducing their ability to pay staff to turn out good-quality news.
When we make use of social media, we need to develop a healthy suspicion regarding what is appearing. Beware of accounts that suddenly appear or develop chameleon behaviours especially when key political events occur around the world. Also be careful about accounts that “spam” their output with a controversial hashtag or adopt a “stuck record” mentality over a topic.
Any time where a jurisdiction is in a state of turmoil is where the Web, especially the Social Web, can be a tool of information warfare. When you use it, you need to be on your guard about what you share or which posts you interact with.
Here, do research on hashtags that are suddenly trending around a social-media platform and play on your emotions and be especially careful of new or inauthentic accounts that run these hashtags.
WhatsApp have improved on their previous efforts regarding this kind of traffic, initially by using a “double-arrow” icon on the left of messages that have been forwarded five or more times.
But now they are trialling an option to allow users to Google the contents of a forwarded message to check its veracity. One way to check a news item’s veracity is whether one or more news publishers or broadcasters that you trust are covering the story and what kind of light they are shining on it.
Here, the function manifests as a magnifying-glass icon that conditionally appears near forwarded messages. If you click or tap on this icon, you start a browser session that shows the results of a pre-constructed Google-search Weblink created by WhatsApp. It avoids the need to copy then paste the contents of a forwarded message from WhatsApp to your favourite browser running your favourite search engine or to the Google app’s search box. This is something that can be very difficult with mobile devices.
But does this function break end-to-end encryption that WhatsApp implements for the conversations? No, because it works on the cleartext that you see on your screen and is simply creating the specially-crafted Google-search Weblink that is passed to whatever software handles Weblinks by default.
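Mechanically, such a feature only needs to URL-encode the message’s visible text into a search link. The sketch below shows the general idea in Python; the exact URL format WhatsApp generates is an assumption on my part, not their documented behaviour.

```python
from urllib.parse import quote_plus

def google_search_url(message_text: str) -> str:
    """Build a Google search link from a message's cleartext,
    roughly as a 'search the Web' button might do on the client side."""
    # quote_plus percent-encodes reserved characters and turns spaces into '+'
    return "https://www.google.com/search?q=" + quote_plus(message_text)

print(google_search_url("Forwarded: miracle cure announced!"))
# https://www.google.com/search?q=Forwarded%3A+miracle+cure+announced%21
```

Because the link is built from text already decrypted on your device and is simply handed to the default browser, nothing about the end-to-end encryption of the conversation changes.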
An initial pilot run is being made available in Italy, Brazil, Ireland (Eire), UK, Mexico, Spain and the USA. It will be part of the iOS and Android native clients and the messaging service’s Web client.
WhatsApp could evolve this function further by allowing the user to choose a different search engine like Bing or DuckDuckGo. But they would have to know the query-syntax requirements for each of these search engines, and it may be a feature that has to be rolled out in a piecemeal fashion.
They could offer the “search the Web” function for any message, rather than only for forwarded messages. I see this as being relevant for people who use the group-chatting functionality that WhatsApp offers, because people can use a group chat as a place to post a rant that links to a questionable Web resource. Or you may have a relative or friend who simply posts questionable information as part of their conversation with you.
At least WhatsApp are adding features to their chat platform’s client software to make it easier to put the brakes on disinformation spreading through it. This could be something for other instant-messaging platforms, including SMS/MMS text clients, to investigate.
The Trusted News Initiative (TNI) is a recently formed group of global news and tech organisations, mostly household names in these fields, who are working together to stop the spread of disinformation where it poses a risk of real-world harm. This also includes flagging misinformation that undermines trust in the TNI’s partner news providers like the BBC. Here, the online platforms can review the content that comes in, perhaps red-flagging questionable content, and newsrooms avoid blindly republishing it.
The partner newsrooms, as well as their online presences, will benefit from having their imagery authenticated by a TNI watermark.
One of their efforts is to agree on and establish an early-warning system to combat the spread of fake news and disinformation. It is being established in the months leading up to polling day for the 2020 US Presidential Election and is flagging disinformation where there is an immediate threat to life or to election integrity.
It is based on efforts to tackle disinformation associated with the 2019 UK general election, the Taiwan 2020 general election, and the COVID-19 coronavirus plague.
Another tactic is Project Origin, which this article is primarily about.
An issue often associated with fake news and disinformation is the use of imagery and graphics to make the news look credible and from a trusted source.
Typically this involves altered or synthesised images and vision overlaid with the logos and other trade dress associated with the BBC, CNN or another respected newsroom. This conveys to people who view it online or on TV that the news is real and comes from a respected source.
Project Origin is about creating a watermark for imagery and vision that comes from a particular authentic content creator. This will degrade whenever the content is manipulated. It will be based around open standards overseen by TNI that relate to authenticating visual content thus avoiding the need to reinvent the wheel when it comes to developing any software for this to work.
One question I would have is whether it is only readable by computer equipment or if there is a human-visible element like the so-called logo “bug” that appears in the corner of video content you see on TV. If it is machine-readable only, will there be the ability for a news publisher or broadcaster to overlay a graphic or message that states the authenticity at the point of publication? Similarly, would a Web browser or native client for an online service have extra logic to indicate the authenticity of an image or video footage?
I would also like to see the ability to indicate the date of the actual image or footage being part of the watermark. This is because some fake news tends to be corroborated with older lookalike imagery like crowd footage from a similar but prior event to convince the viewer. Some of us may also look at the idea of embedding the actual or approximate location of the image or footage in the watermark.
There is also the issue of newsrooms importing images and footage from other sources whose equipment they don’t have control over. For example, an increasing amount of amateur and videosurveillance imagery is used in the news usually because the amateur photographer or the videosurveillance setup has the “first images” of the news event. Then there is reliance on stock-image libraries and image archives for extra or historical footage; along with newsrooms and news / PR agencies sharing imagery with each other. Let’s not forget media companies who engage “stringers” (freelance photographers and videographers) who supply images and vision taken with their own equipment.
The question with all this, especially with amateur / videosurveillance / stringer footage taken with equipment that media organisations don’t have control over is how such imagery can be authenticated by a newsroom. This is more so where the image just came off a source like someone’s smartphone or the DVR equipment within a premises’ security room. There is also the factor that one source could tender the same imagery to multiple media outlets, whether through a media-relations team or simply offering it around.
At least Project Origin will be useful as a method to allow the audience to know the authenticity and provenance of imagery that is purported to corroborate a newsworthy event.
I regularly cover the issue of fake news and disinformation on HomeNetworking01.info. This is because consuming news and information has become part of our online lives thanks to the ubiquity and affordability of the Internet.
I have highlighted the use of online sources like social media, Web portals, search engines or news aggregators as our regular news sources along with the fact that it is very easy to spread rumour and disinformation around the world thanks to the ease of publishing the Web provides. As well, it is easy for our contacts to spread links to Web resources or iterate messages in these resources via the Social Web, emails or instant-messaging platforms.
This issue has been of concern since 2016, when fake news and disinformation circulating around the Web were used to distort the outcome of the UK’s Brexit referendum and the US election that brought Donald Trump into the presidency of that country.
Since then, I have covered efforts by the tech industry and others to make us aware of fake news, disinformation and propaganda, such as through the use of fact-checkers or online services implementing robust data-security and account-management policies and procedures. It also includes articles that encourage the use of good-quality traditional media sources during critical times like national elections or the coronavirus pandemic, and I even see being armed against fake news and disinformation as part of data security.
The ABC have run a video series about the media as part of their “Behind The News” schools-focused TV show, which underscores the value of media literacy and of discerning the calibre of the news being presented. On Tuesday 6 July 2020, I watched “The Drum”, where one of the people behind this series described it as highly relevant viewing for everyone, no matter how old we are, thanks to the fake news and disinformation being spread around the Web.
It is part of their continued media-literacy efforts like their “Media Watch” TV series run on Monday nights which highlights and keeps us aware of media trends and issues.
In that same show, they even recommended that if we do post something that critiques a piece of fake news or disinformation, we were to reference the material with a screenshot rather than sharing a link to the content. This is because interactions, link shares and the like are often used as a way to “game” social-network and search-engine algorithms, making it easier to discover the questionable material.
The first video looked at how and why fake news has been spread over the ages, such as to drive newspaper sales or listenership and viewership of broadcasts. It also touched on how such news is spread, including by taking advantage of the “thought and social bubbles” that we establish. As well, one of the key issues highlighted was the fact that fake news tends to be out of touch with reality, and that we should research an article and who is behind it before taking it as gospel and sharing it further.
The second video of the series touches on the quality of the news and information sources that can be used to drive a news story. It examines the difference between primary sources that provide first-hand information “from the horse’s mouth” and secondary sources that evaluate or interpret the information.
It also touches on whether the news source is relying primarily on secondary sources or hearsay vs expert or authoritative testimony. They raise the legitimacy of contrasting opinion like academic debate or where there isn’t enough real knowledge on the topic. But we were warned about news sources that are opinion-dominant rather than fact-dominant. Even issues like false equivalence, bias or use of anonymous sources were identified along with the reason behind the source presenting the information to the journalist or newsroom.
This video even summed up how we assess news sources by emphasising the CRAP rule – Currency (how recent the news is), Reliability (primary vs secondary source as well as reliability of the source), Authority (is the source authoritative on the topic) and Purpose (why was the news shared such as biases or fact vs opinion).
The third video in the series talks about what makes information newsworthy. This depends on who is reporting it and the consumers who will benefit from the information. It also covered news values like timeliness, frequency of occurrence, cultural proximity and the presence of people in the public spotlight, along with factors like conflict or the tone of the story. It underscored why and how you are told of information that could shape your view of the world.
The fourth video looks at bias within the media and why it is there. The reasons that were called out include to influence the way we think or vote or what goods or services we buy. It also includes keeping the media platform’s sponsors or commercial partners in a positive light, a media platform building an increasingly-large army of loyal consumers, or simply to pander to their audience’s extant biases.
It also looked at personal biases that affect what we consume and how we consume it, including the “I-am-right” confirmation bias, also known as the “rose-tinted glasses” concept. Looking at how people or ideas are represented on a media platform, what kind of stories appear in that platform’s output including what gets top place, and how the stories are told with both pictures and words can also highlight potential biases. There is also the fact that our personal biases can shape what we think of a media outlet.
The last of the videos looks at honesty, accuracy and ethics within the realm of media. It underscores key values like honest, accurate, fair, independent and respectful reporting along with various checks and balances that the media is subject to. Examples of these include the protections that the law of the land offers like the tort of defamation, contempt of court and similar crimes that protect the proper role of the courts of justice, hatred-of-minority-group offences and right-to-privacy offences. There is also the oversight offered by entities like broadcast-standards authorities and press councils who have effective clout.
The legal references highlighted were primarily based on what happens within Australia, while British viewers may see something very similar there due to an implied freedom of speech along with similarly rigorous defamation laws. The USA may take slightly different approaches, especially where they rely on the First Amendment of their Constitution, which grants an express freedom of speech.
But it sums up the role of the media as a check on the powerful and its power to shine a light on what needs to be focused on. The series also shows that we can’t take the media for granted and need to be aware of how the news appears on whatever media platform we use. Although the primary focus is on traditional media, in some ways it can also encourage us to critically assess online media resources.
The video series underscores what the news media is about and covers this issue in a platform-agnostic manner so we don’t consider a particular media platform or type as a purveyor of fake or questionable news. As well, the series presents the concepts in a simple-to-understand manner but with the use of dramatisation in order to grab and keep our attention.
Here, I often wonder whether other public-service or community broadcasters are running a similar media-literacy video program that can be pitched at all age levels in order to encourage the communities they reach to be astute about the media they come across.
Increasingly, images and video are being seen as integral to news coverage, with most of us regarding them, especially photographs, as important when corroborating a fact or news story.
But these are becoming weaponised to tell a different truth from what the camera actually captured. One way is to use the same or a similar image to corroborate a different fact, including the use of image-editing tools to doctor the image so it tells a different story.
I have covered this previously when talking about the use of reverse-image-search tools like Tineye or Google Image Search to verify the authenticity of an image. It is the same kind of feature that Google has enabled in its search interface when you “google” for something, or in its news-aggregation platforms.
Google is taking this further for people who search for images using their search tools. Here, they are adding images to their fact-check processes so it is easy to see whether an image has been used to corroborate questionable information. You will see a “fact-check” indicator near the image thumbnail and when you click or tap on the image for a larger view or more details, you will see some details about whether the image is true or not.
A similar feature appears on the YouTube platform for exhibiting details about the veracity of video content posted there. But this feature currently is available to users based in Brazil, India and the USA and I am not sure whether it will be available across all YouTube user interfaces, especially native clients for mobile and set-top platforms.
It is in addition to Alphabet, their parent company, offering a free tool to check whether an image has been doctored. This is because meddling with an image using something like Adobe Photoshop or GIMP so it depicts something else is being seen as a way to convey a message that isn’t true. The tool, called Assembler, uses artificial intelligence and algorithms that detect particular forms of image manipulation to indicate the veracity of an image.
But I would also see the rise of tools that analyse audio and video material to identify deepfake activity, or video sites, podcast directories and the like using a range of tools to identify the authenticity of content made available through them. This may include “fact-check” labels with facts being verified by multiple newsrooms and universities; or the content checked for out-of-the-ordinary editing techniques. It can also include these sites and directories implementing a feedback loop so that users can have questionable content verified.
The coronavirus plague is having us at home, inside and online more…. (iStock by Getty Images)
The Covid-19 coronavirus plague is changing our habits more and more as we stay at home to avoid the virus or avoid spreading it onwards. Now we are strongly relying on our home networks and the Internet to perform our work, continue studying and connect with others in our social circles.
But this state of affairs is drawing out its own cyber-security risks, with computing devices being vulnerable to malware and hastily-written software being preferred for tasks like videoconferencing. Not to mention the risk of an increasing flow of fake news and disinformation about this disease.
What can we do?
General IT security
But we need to be extra vigilant about our data security and personal privacy
The general IT security measures are very important even in this coronavirus age. Here, you need to make sure that all the software on your computing devices, including their operating systems, is up-to-date and has the latest patches. It also applies to your network, TV set-top and Internet-of-Things hardware, where you need to make sure the firmware is up-to-date. The best way to achieve this is to have the devices automatically download and install the revised software themselves.
As well, managing the passwords for our online services and our devices properly prevents the risk of data and identity theft. It may even be a good idea to use a password-vault program to manage our passwords, which can stop us from reusing them across services. Similarly, using a word processor to keep a list of your passwords that is saved on removable media and printed out, with both the hard and electronic copies kept in a secure location, may also work wonders here.
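The key habit here is a strong, unique password for every service, which is the job a password-vault program automates for you. As a minimal sketch of the idea, the snippet below uses a cryptographically secure random source to generate such passwords; the character set and length are my own illustrative choices.

```python
import secrets
import string

# Character set for generated passwords; an illustrative choice,
# real vault programs let you tailor this to each site's rules.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length: int = 16) -> str:
    """Pick each character with a cryptographically secure random source."""
    if length < 12:
        raise ValueError("use at least 12 characters")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# A fresh password per service avoids reuse across accounts
passwords = {site: generate_password() for site in ("email", "banking", "social")}
```

Using the `secrets` module rather than `random` matters here, because `random` is predictable and not meant for anything security-related.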
Make sure that your computer is running a desktop / endpoint security program, even if it is the one that is part of the operating system. Similarly, using an on-demand scanning tool like Malwarebytes can work as a way to check for questionable software. As well, you may have to check that the software installed on all your computing devices is what you actually use, and even verify with multiple knowledgeable people whether that program that is the “talk of the town” should be on your computer.
If you are signing up with new online services, it may even be a better idea to implement social sign-on with established credential pools like Google, Facebook or Microsoft. These setups implement a token between the credential pool and the online service as the authentication factor rather than a separate username and password that you create.
As well, you will be using the Webcam more frequently on your computing devices. The security issue with the Webcam and microphone is more important with computing setups that have the Webcam integrated in the computer or monitor, like with portable computing devices, “all-in-one” computers or monitors equipped with Webcams.
Here, you need to be careful about which programs have access to the Webcam and microphone on your device. If newly-installed software asks to use your camera or microphone and that request doesn’t match the way the software works, deny it access.
If you install a health-department-supplied tracking app as part of your government’s contact-tracing and disease-management efforts, remember to remove this app as soon as the coronavirus crisis is over. Doing this will protect your privacy once there is no real need to manage the disease.
Email and messaging security
Your email and messaging platforms will become an increased security risk at this time thanks to phishing including business email compromise. I have covered this issue in a previous article after helping someone reclaim their email service account after a successful phishing attempt.
An email or message is likely a phishing attempt if it isn’t commensurate with proper business writing standards for your country, carries a sense of urgency or is too good to be true. Once you receive these emails, it is prudent to report them and then delete them forthwith.
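These warning signs can even be checked for mechanically. The snippet below is a toy illustration of the traits described above; the phrase lists are my own examples, and real mail filters are far more sophisticated than simple phrase matching.

```python
import re

# Example phrases only; a real filter would use far richer signals
URGENCY_PHRASES = ("act now", "urgent", "verify your account",
                   "suspended", "immediately")
TOO_GOOD_PHRASES = ("you have won", "free prize", "claim your reward")

def phishing_warning_signs(message: str) -> list:
    """Return the suspicious traits found in an email's text."""
    text = message.lower()
    signs = []
    if any(p in text for p in URGENCY_PHRASES):
        signs.append("sense of urgency")
    if any(p in text for p in TOO_GOOD_PHRASES):
        signs.append("too good to be true")
    # Mass phishing often uses a generic greeting instead of your name
    if re.search(r"dear (customer|user|sir/madam)", text):
        signs.append("generic greeting")
    return signs
```

The point is less the code than the habit: consciously checking a message against a short list of traits before acting on it.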
In the case of email addresses from official organisations, make sure that the domain name is the organisation’s proper one, exactly the domain they would use for their Web presence. An email address may have the domain-name part following the “@” symbol prepended with a server identifier like “mail” or “email”, but there should be nothing appended to the domain name.
Also, be familiar with the particular domain-name structures used by official organisation clusters like the civil / public service, international organisations and academia when you open email or surf the Web. These will typically use protected top-level domain suffixes like “.gov”, “.int” or “.edu” and won’t use common suffixes like “.com”. This will help you identify whether a site or a sender is the proper authority or not.
Messaging and video-conferencing
Increasingly as we stay home due to the risk of catching or spreading the coronavirus plague, we are relying on messaging and video-conferencing software more frequently to communicate with each other. For example, families and communities are using video-conferencing software like Zoom or Skype to make a virtual “get-together” with each other thanks to these platforms’ support for many-to-many videocalls.
But as we rely on this software more, we need to make sure that our privacy, business confidentiality and data security is protected. This is becoming more important as we engage with our doctors, whether they be general practitioners or specialists, “over the wire” and reveal our medical issues to them that way.
If you value privacy, look towards using an online communications platform that implements end-to-end encryption. In fact, most of the respected “over-the-top” communications platforms like WhatsApp, Viber, Skype and iMessage offer this feature for 1:1 conversations between users on the same platform. Some, like WhatsApp and Viber, offer this same feature for group conversations between users on that same platform.
Video-conferencing software like Zoom and Skype
When you are hosting a video-conference using Zoom, Skype or similar platforms, be familiar with any meeting-setup and meeting-management features that the platform offers. If the platform uses a Weblink to join a video-conference that you can share, use email or a messaging platform to share that link with potential participants. Avoid posting this on the Social Web so you keep gatecrashers from your meeting or class.
As well, if the platform supports password-protected meeting entry, use this feature to limit who can join the meeting. Here, it is also a good idea to send the password as a separate message from the meeting’s Weblink.
Some platforms like Zoom offer a waiting-room function which requires potential participants to wait and be vetted by the conference’s moderator before they can participate. As well, these platforms may have a meeting-lockout function so no more people can join the video-conference; use it once all the participants you expect are present in the meeting.
You need to regulate the screen-sharing feature that your platform offers, which allows meeting participants to share currently-running app or desktop user interfaces. You may have the ability to limit this function to the moderator’s computer or a specified participant’s computer, which will prevent people from showing offensive imagery or videos to all the meeting’s participants. As well, you may need to regulate access to any file-sharing functionality that the platform offers, in order to prevent the video-conference becoming a vector for spreading malware or offensive material.
Here, I would recommend that you use trusted news sources like the respected public-service broadcasters for information about this plague. As well, I would recommend that you visit respected health-information sites including those offered “from the horse’s mouth” by local, regional or national government agencies for the latest information.
As well, trust your “gut reaction” when it comes to material that is posted online about the coronavirus plague, including the availability of necessary food or medical supplies. Here, be careful of content that is “out of reality” or plays on your emotions. The same attitude should also apply when it comes to buying essential supplies online and you are concerned about the availability and price of these supplies.
As we spend more time indoors and online thanks to the coronavirus, we need to keep our computing equipment including our tablets and smartphones running securely to protect our data and our privacy.
Increasingly, most of us who regularly interact with the Internet will be encouraged to perform reverse-image searches.
This is where you use an image you supply or reference as a search term for the same or similar images on other Internet resources. It can also be about identifying a person or other object that is in the image.
Increasingly this is being used by people who engage in online dating to verify the authenticity of the person whom they “hit” on in an online-dating or social-media platform. It is due to romance scams where “catfishing” (pretending to be someone else in order to attract people of a particular kind) is part of the game. Here, part of the modus operandi is for the perpetrator to steal pictures of other people that match a particular look from photo-sharing or social-media sites and use these images in their profile.
A regular computer like this Dell Inspiron 14 5000 2-in-1 makes it easier to do a reverse image search thanks to established operating system and browser code and its user interface.
The process of using these services involves uploading the image to the service, including via “copy-and-paste” techniques, or pasting the image’s URL into an address box in the search engine’s user interface. The latter is a “search-by-reference” method, with the reverse-image-search site loading the image associated with that link as its search term.
Using a regular desktop or laptop computer that runs the common desktop operating systems makes this job easier. This is because the browsers offered on these platforms implement tabs or allow multiple sessions so you can run the site in question in one tab or window and one or two reverse-image-search engines in other tabs or windows.
These operating systems also maintain well-developed file systems and copy-and-paste mechanisms that facilitate the transfer of URLs or image data to these reverse-image-search engines. That will also apply if you are dealing with a native app for that online service, such as the client app offered by Facebook or LinkedIn for Windows. As well, Chrome and Firefox provide drag-and-drop support, so you can drag the image from that Tinder or Facebook profile in one browser session to Tineye running in the other browser session.
But mobile users may find this process very daunting. Typically it requires the site to be opened and logged in to in Chrome or Safari then opened as a desktop version which is the equivalent of viewing it on a regular computer. For Chrome, you have to tap on the three-dot menu and select “Request Desktop Site”. For Safari, you have to tap the upward-facing arrow to show the “desktop view” option and select that option.
Then you open the image in a new tab and copy the image’s URL from the address bar. That is before you visit Google Image Search or Tineye to paste the URL in that app’s interface.
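The “search-by-reference” step above amounts to handing the image’s URL to the search engine as a query parameter. As a hedged sketch, the snippet below builds such a link for TinEye; the query-string format reflects how TinEye’s public search page accepted an image URL at the time of writing and may change.

```python
from urllib.parse import urlencode

# Assumed query format for TinEye's public search page; the "url"
# parameter name is how that page accepted search-by-reference requests.
def tineye_search_url(image_url: str) -> str:
    """Build a link that asks TinEye to search by the image's URL."""
    return "https://tineye.com/search?" + urlencode({"url": image_url})
```

Because the image URL is itself percent-encoded inside the query string, the copied address can be pasted or shared without its slashes and colons confusing the search engine.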
Google has built in to recent mobile versions of Chrome a shortcut to their reverse-image-search function. Here, you “dwell” on the image with your finger to expose a pop-up menu which has the “Search Google For This Image” option. The Bing app has the ability for you to upload images or screenshots for searching.
Share option in Google Chrome on Android
If you use an app like the Facebook, Instagram or Tinder mobile clients, you may have to take a screenshot of the image you want to search on. Recent iOS and Android versions also provide the ability to edit a screenshot before you save it thus cutting out the unnecessary user-interface stuff from what you want to submit. Then you open up Tineye or Google Image Search in your browser and upload the image to the reverse-image-search engine.
How can reverse image searching on the mobile platforms be improved?
What can be done to facilitate reverse image searching on the mobile platforms is for reverse-image-search engines to create lightweight apps for each mobile platform. This app would make use of the mobile platform’s “Share” function for you to upload the image or its URL to the reverse-image-search engine as a search term. Then the app would show you the results of your search through a native interface or a view of the appropriate Web interface.
A reverse-image-search tool like Tineye could be a share-to destination for mobile platforms like iOS or Android
Why have this app work as a “share to” destination? This is because most mobile-platform apps and Web browsers make use of the “share to” function as a way to take a local or online resource further. It doesn’t matter whether it is to send to someone else via a messaging platform including email; obtain a printout or, in some cases, stream it on the big screen via AirPlay or Chromecast.
The lightweight mobile app that works with a reverse-image-search engine answers the reality that most of us use smartphones or mobile-platform tablets for personal online activity. This is more so with social media, online dating and online news sources, thanks to the “personal” size of these devices.
What is becoming real is that reverse image searching, whether of particular images or Webpages, is being seen as important for our security and privacy and for our society’s stability.