Tag: fake news

What is prebunking in the context of news accuracy?

Facebook login page

Prebunking is used to rebut potential disinformation campaigns on social media

As you hear about how different entities are managing fake news and disinformation, you may come across the term “prebunking” in the war against these disinformation campaigns. Here, it is about making sure correct and accurate information gets out first, before falsehoods gain traction.

Fact-checkers operated by newsrooms and universities and engaged by media outlets and respectable online services typically analyse news that comes their way to see if it is factual and truthful. One of their primary tasks is to “debunk” or discredit what they have found to be falsehoods by publishing information that rebuts the falsehood.

But the war against fake news and disinformation is also taking another approach by dealing with potential disinformation and propaganda in a pre-emptive manner.

Here, various organisations like newsrooms, universities or government agencies will anticipate a line of disinformation that is likely to surface while concurrently publishing material that refutes the anticipated falsehood. It may also be about rebutting possible information manipulation or distortion of facts by publishing material that carries the accurate information.

This process is referred to as “prebunking” rather than debunking because it forewarns the general public about possible falsehoods or information manipulation. It is also couched in terms analogous to inoculation or vaccination because a medical vaccine, like one of those COVID jabs, establishes a defence against a pending infection, making it hard for that infection to take hold in your body.

Prebunking is seen as a “heads-up” alert to a potential disinformation campaign so we can be aware of it and take action against it. One way of describing this is prebunking as the “guardrail at the top of the cliff” and debunking as the “ambulance at the bottom of the cliff”. These efforts are also a way to sensitise us to the techniques used to have us believe distorted messaging and disinformation campaigns by highlighting fearmongering, scapegoating and pandering to our base instincts, emotions and biases.

Prebunking efforts are typically delivered as public-service announcements or posts that are run on Social Web platforms by government entities, advocacy organisations or similar groups. Other media platforms like television or radio public-service announcements can be used to present prebunking information. Where a post or announcement leads to any online resources, this will be in the form of a simple-language landing page that even provides a FAQ (frequently-asked questions) article about the topic and the falsehoods associated with it. Examples of this in Australia are the state and federal election authorities, which have been running posts on social media platforms to counter American-style voter-suppression disinformation that surfaces around Australian elections.

Such campaigns are in response to the disinformation risks that are presented by the 24-hour news cycle and the Social Web. In a lot of cases, these campaigns are activated during a season of high disinformation risk like an election or referendum. Sometimes a war in another part of the world may be the reason to instigate a prebunking campaign because this is where the belligerent states will activate their propaganda machines to make themselves look good in the eyes of the world.

But the various media-literacy efforts run by public libraries, educational institutions, public-service broadcasters and the like are also prebunking efforts in their own right. For example, the ABC’s “Media Watch” exposes where traditional and social media are at risk of information manipulation or spreading disinformation, highlighting tropes used by media organisations to manipulate readers or viewers. The ABC also ran a “Behind The News” video series during 2020 about media literacy in the era of fake news and disinformation, with “The Drum” cross-promoting it as something for everyone to see. A similar video-lecture series and resource page has also been made available by the University of Washington on this topic.

What prebunking is all about is disseminating correct and accurate information relevant to an issue where there is a strong likelihood of misinformation or disinformation, so that people are aware of the proper facts and of what is likely to go around.

Australian Electoral Commission takes to Twitter to rebut election disinformation

Articles

Australian House of Representatives ballot box – press picture courtesy of Australian Electoral Commission

Don’t Spread Disinformation on Twitter or the AEC Will Roast You (gizmodo.com.au)

Federal Election 2022: How the AEC Is Preparing to Combat Misinformation (gizmodo.com.au)

From the horse’s mouth

Australian Electoral Commission

AEC launches disinformation register ahead of 2022 poll (Press Release)

Previous coverage on HomeNetworking01.info

Being cautious about fake news and misinformation in Australia

My Comments

The next 18 months are to be very significant as far as general elections in Australia go. This is due to a Federal election and state elections in the most populous states taking place during that period: this year the Federal election has to take place by May and the Victorian state election by November, with the New South Wales state election taking place by March 2023.

Democracy sausages prepared at election day sausage sizzle


Two chances over the next 18 months to benefit from the democracy sausage as you cast your vote
Kerry Raymond, CC BY 4.0 <https://creativecommons.org/licenses/by/4.0>, via Wikimedia Commons

Oh yeah, more chances to eat those democracy sausages available at that school’s sausage sizzle after you cast that vote. But the campaign machine has started up early this year at the Federal level, with United Australia Party ads appearing on commercial TV since the Winter Olympics, yard signs from various political parties appearing in my local neighbourhood and an independent candidate for the Kooyong electorate running ads online through Google AdSense, with some of those ads appearing on HomeNetworking01.info. This is even before the Governor-General has issued the necessary writs to dissolve the Federal Parliament and commence the election cycle.

Ged Kearney ALP candidate yard sign

The campaigns are underway even before the election is called

This season will be coloured by the COVID coronavirus plague and the associated vaccination campaigns, lockdowns and other public-health measures used to mitigate this virus. This will exacerbate Trump-style disinformation campaigns affecting the Australian electoral process, especially from anti-vaccination / anti-public-health-measure groups.

COVID will also exacerbate issues regarding access to the vote in a safe manner. This includes dealing with people who are isolated or quarantined due to them or their household members being struck down by the disease or allowing people on the testing and vaccination front lines to cast their vote. Or it may be about running the polling booths in a manner that is COVID-safe and assures the proper secret ballot.

There is also the recent flooding taking place in Queensland and NSW, which brings about questions regarding access to the vote for affected communities and the volunteers helping those communities. All these situations would depend on people knowing where and how to cast “convenience votes” like early or postal votes, or knowing where the nearest polling booth is, especially with the flooding rendering the usual booths in affected areas out of action.

The Australian Electoral Commission, which oversees elections at a federal level, has established a register to record fake-news and disinformation campaigns that appear online to target Australians. It will also appear at least on Twitter to debunk disinformation that is swirling around on that platform and using common hashtags associated with Australian politics and elections.

Add to this a stronger, wider “Stop And Consider” campaign to encourage us to be mindful about what we see, hear or read regarding the election. This is based on their original campaign run during the 2019 Federal election to encourage us to be careful about what we share online. Here, that was driven by that Federal election being the first of its kind since we became aware of online fake-news and disinformation campaigns and their power to manipulate the vote.

There will also be stronger liaison between the AEC and the online services in relation to sharing intelligence about disinformation campaigns.

But the elephant in the room regarding election safety is IT security and cyber safety for the many IT systems that will see a significant amount of election-related data being created or modified through this season.

Service Victoria contact-tracing QR code sign at Fairfield Primary School

Even the QR-code contact-tracing platforms used by state governments as part of their COVID management efforts have to be considered as far as IT security for an election is concerned – like this one at a school that is likely to be a polling place

This doesn’t just relate to the electoral oversight bodies but any government, media or civil-society setup in place during the election.

That would encompass things ranging from State governments wanting to head towards fully-electronic voter registration and electoral-roll mark-off processes, through the IT that politicians and political parties use for their business processes and the state-government QR-code contact-tracing platforms regularly used during this COVID-driven era, to the IT operated by the media and journalists themselves to report the election. Here, it’s about the safety of the participants in the election process, the integrity of the election process and the ability for voters to make a proper and conscious choice when they cast their vote.

Such systems have a significant number of risks associated with their data, such as cyber attacks intended to interfere with or exfiltrate data or slow down the performance of these systems. It is more so where the perpetrators of this activity extend to adverse nation states or organised crime anywhere in the world. As well, interference with these IT systems is used as a way to create and disseminate fake news, disinformation and propaganda.

But the key issue regarding Australia’s elections being safe from disinformation and election interference is for us to be media-savvy. That includes being aware of material that plays on your emotions; being aware of bias in media and other campaigns; knowing where sources of good-quality and trustworthy news are; and placing importance on honesty, accuracy and ethics in the media.

Here, it may be a good chance to look at the “Behind The News” media-literacy TV series the ABC produced during 2020 regarding the issue of fake news and disinformation. Sometimes you may also find that established media, especially the ABC and SBS or the good-quality newspapers, may be the way to go for reliable election information. Even looking at official media releases “from the horse’s mouth” at government or political-party Websites may work as a means to identify exaggeration that may be taking place.

Having the various stakeholders encourage media literacy and disinformation awareness, along with government and other entities taking a strong stance with cyber security can be a way to protect this election season.

YouTube to examine further ways to control misinformation

Article

YouTube recommendation list

YouTube to further crack down on misinformation using warning screens and other strategies

YouTube Eyes New Ways to Stop Misinformation From Spreading Beyond Its Reach – CNET

From the horse’s mouth

YouTube

Inside Responsibility: What’s next on our misinfo efforts (Blog Post)

My Comments

YouTube’s part in controlling the spread of repeated disinformation has been found to be very limited in some ways.

This was focused on managing accounts and channels (collections of YouTube videos submitted by a YouTube account holder and curated by that holder) in a robust manner like implementing three-strikes policies when repeated disinformation occurs. It extended to managing the content recommendation engine in order to effectively “bury” that kind of content from end-users’ default views.

But other new issues have come up in relation to this topic. One of these is to continually train the artificial-intelligence / machine-learning subsystems associated with how YouTube operates with new data that represents newer situations. This includes the use of different keywords and different languages.

Another approach that will fly in the face of disinformation purveyors is to point end-users to authoritative resources relating to the topic at hand. This will typically manifest in lists of hyperlinks to text and video resources from sources of respect when there is a video or channel that has questionable material.

But a new topic or new angle on an existing topic can yield a data-void where there is scant or no information on the topic from respectable resources. This can happen when there is a fast-moving news event and is fed by the 24-hour news cycle.

Another issue is where someone creates a hyperlink to or embeds a YouTube video in their online presence. This is a common way to put YouTube video content “on the map” and can cause a video to go viral by acquiring many views. In some cases like “communications-first” messaging platforms such as SMS/MMS or instant-messaging, a preview image of the video will appear next to a message that has a link to that video.

Initially YouTube looked at the idea of preventing a questionable resource from being shared through the platform’s user interface. But questions were raised about this including limiting a viewer’s freedoms regarding taking the content further.

The issue that wasn’t even raised is the fact that the video can be shared without going via YouTube’s user interface. This can be through other means like copying the URL in the address bar if viewing on a regular computer or invoking the “share” intent on modern desktop and mobile operating systems to facilitate taking it further. In some operating systems, that can extend to printing out material or “throwing” image or video material to the large-screen TV using a platform like Apple TV or Chromecast. Add to this the fact that a user may want to share the video with others as part of academic research or a news report.

Another approach YouTube is looking at is based on an age-old practice implemented by responsible TV broadcasters, and by YouTube itself with violent, age-restricted or other questionable content. That is to show a warning screen, sometimes accompanied with an audio announcement, before the questionable content plays. Most video-on-demand services implement an interactive approach, at least in their “lean-forward” user interfaces, where the viewer has to assent to the warning before they see any of that content.

In this case, YouTube would run a warning screen regarding the existence of disinformation in the video content before the content plays. Such an approach would make us aware of the situation and act as a “speed bump” against continual consumption of that content or following through on hyperlinks to such content.

Another issue YouTube is working on is keeping its anti-disinformation efforts culturally relevant. This scopes in various nations’ historical and political contexts, whether a news or information source is an authoritative independent source or simply a propaganda machine, fact-checking requirements, linguistic issues amongst other things. The historical and political issue could include conflicts that had peppered the nation’s or culture’s history or how the nation changed governments.

Having support for relevance to various different cultures provides YouTube’s anti-disinformation effort with some “look-ahead” sense when handling further fake-news campaigns. It also encompasses recognising where a disinformation campaign is being “shaped” to a particular geopolitical area with that area’s history being woven into the messaging.

But whatever YouTube is doing may have limited effect if the purveyors of this kind of nonsense use other services to host this video content. This can manifest in alternative “free-speech” video hosting services like BitChute, DTube or PeerTube. Or it can be the content creator hosting the video content on their own Website, something that becomes more feasible as the kind of computing power needed for video hosting at scale becomes cheaper.

What is being raised is YouTube using their own resources to limit the spread of disinformation that is hosted on their own servers rather than looking at this issue holistically. But they are looking at issues like the ever-evolving message of disinformation that adapts to particular cultures along with using warning screens before such videos play.

This is compared to third-party-gatekeeper approaches like NewsGuard (HomeNetworking01.info coverage) where an independent third party scrutinises news content and sites then puts their results in a database. Here various forms of logic can work from this database to deny advertising to a site or cause a warning flag to be shown when users interact with that site.
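To make that gatekeeper idea concrete, here is a minimal Python sketch of the kind of logic that could sit on top of such a ratings database. The domain list, scores and threshold are hypothetical stand-ins for an independently maintained database, not NewsGuard’s actual data or API.

```python
# A minimal sketch of the third-party-gatekeeper idea: look up a site's trust
# rating and decide whether to flag it. Ratings and threshold are hypothetical.
from urllib.parse import urlparse

SITE_RATINGS = {
    "example-news.com": 87.5,      # hypothetical trust score out of 100
    "dodgy-clickbait.net": 12.0,
}

TRUST_THRESHOLD = 60.0

def should_flag(url: str) -> bool:
    """Return True if the linked site should trigger a warning flag."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    score = SITE_RATINGS.get(domain)
    # Unrated sites are treated cautiously rather than assumed trustworthy.
    return score is None or score < TRUST_THRESHOLD

if __name__ == "__main__":
    for link in ("https://www.example-news.com/story", "https://dodgy-clickbait.net/shock"):
        print(link, "-> flag" if should_flag(link) else "-> ok")
```

The same lookup could just as easily drive an ad network’s “deny advertising” decision as a browser’s warning flag.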

But by realising that YouTube is being used as a host for fake news and disinformation videos, they are taking further action on this issue. This is even though Google will end up playing cat-and-mouse when it comes to disinformation campaigns.

The Spotify disinformation podcast saga could give other music streaming services a chance

Articles

Spotify Windows 10 Store port

Spotify dabbling in podcasts and strengthening its ties with podcasters is placing it at risk of carrying anti-vaxx and similar disinformation

Joni Mitchell joins Neil Young’s Spotify protest over anti-vax content | Joni Mitchell | The Guardian

Nils Lofgren Pulls Music From Spotify – Billboard

My Comments

Spotify has over the last two years jumped on the podcast-hosting wagon even though it was originally providing music on demand.

But just lately they have been hosting the podcast output of Joe Rogan, who is known for disinformation about COVID vaccines. They even strengthened their business relationship with Joe Rogan using the various content monetisation options they offer and giving his podcast platform-exclusive treatment.

There has been social disdain about Spotify’s business relationship with Joe Rogan due to social responsibility issues relating to disinformation about essential issues such as vaccination. Neil Young and Joni Mitchell have pulled their music from this online music service and an increasing number of their fans are discontinuing business with Spotify. Now Nils Lofgren, the guitarist from the E Street Band associated with Bruce Springsteen, is intending to pull music he has “clout” over from Spotify and encourages more musicians to do so.

Tim Burrowes, who founded Mumbrella, even wrote in his Unmade blog about the possibility of Spotify being subject to what happened with Sky News and Radio 2GB during the Alan Jones days. That was where one or more collective actions took place to drive advertisers to remove their business from these stations. This could be more so where companies have to be aware of brand safety and social responsibility when they advertise their wares.

In some cases, Apple, Google and Amazon could gain traction with their music-on-demand services. But on the other hand, Deezer, Qobuz and Tidal could gain an increased subscriber base especially where there is a desire to focus towards European business or to deal with music-focused media-on-demand services rather than someone who is running video or podcast services in addition.

There are questions about whether a music-streaming service like Spotify should be dabbling in podcasts and spoken-word content. That includes any form of “personalised-radio” service where music, advertising and spoken-word content are presented in a manner akin to a local radio station’s output.

Then the other question that will come about is the expectation for online-audio-playback devices like network speakers, hi-fi network streamers and Internet radios. This would extend to other online-media devices like smart TVs or set-top boxes. Here, it is about allowing different audio-streaming services to be associated with these devices and assuring a simplified consistent user experience out of these services for the duration of the device’s lifespan.

That includes operation-by-reference setups like Spotify Connect where you can manage the music from the online music service via your mobile device, regular computer or similar device. But the music plays through your preferred set of speakers or audio device and isn’t interrupted if you make or take a call, receive a message or play games on your mobile device.

What has come about is that the content hosted on an online-media platform, or the content creators the platform gives special treatment to, may end up affecting that platform’s reputation. This is especially so where the content creator is involved in fake news or disinformation.

Being aware of astroturfing as an insidious form of disinformation

Article

Astroturfing more difficult to track down with social media – academic | RNZ News

My Comments

An issue that is raised in the context of fake news and disinformation is a campaign tactic known as “astroturfing”. This is something that our online life has facilitated thanks to easy-to-produce Websites on affordable Web-hosting deals along with the Social Web.

I am writing about this on HomeNetworking01.info because astroturfing is another form of disinformation that we need to be careful of in this online era.

What is astroturfing?

Astroturfing is organised propaganda activity intended to create a belief of popular grassroots support for a viewpoint in relationship to a cause or policy. This activity is organised by one or more large organisations with it typically appearing as the output of concerned individuals or smaller community organisations such as a peak body for small businesses of a kind.

But there is no transparency about who is actually behind the message or the benign-sounding organisations advancing that message. Nor is there any transparency about the money flow associated with the campaign.

The Merriam-Webster Dictionary, which is the dictionary of respect for the American dialect of the English language, defines it as:

organized activity that is intended to create a false impression of a widespread, spontaneously arising, grassroots movement in support of or in opposition to something (such as a political policy) but that is in reality initiated and controlled by a concealed group or organization (such as a corporation).

The etymology of this word comes about as a play on the “grassroots” expression. It alludes to the Astroturf synthetic turf initially installed in the Astrodome stadium in Houston in the USA, with the “Astroturf” trademark becoming a generic trademark for synthetic sportsground turf sold in North America.

This was mainly practised by Big Tobacco to oppose significant taxation and regulation measures against tobacco smoking, but continues to be practised by entities whose interests are against the public good.

How does astroturfing manifest?

It typically manifests as one or more benign-sounding community organisations that appear to demonstrate popular support for or against a particular policy. It typically affects policies for the social or environmental good where there is significant corporate or other “big-money” opposition to these policies.

The Internet era has made this more feasible thanks to the ability to create and host Websites for cheap. As online forums and social media came on board, it became feasible to set up multiple personas and organisational identities on forums and social-media platforms to make it appear as though many people or organisations are demonstrating popular support for the argument. It is also feasible to interlink Websites and online forums or Social-Web presences by posting a link from a Website or blog in a forum or Social-Web post or having articles on a Social Web account appear on one’s Website.

The multiple online personas created by one entity for the purpose of demonstrating the appearance of popular support are described as “sockpuppet” accounts. This is in reference to children’s puppet shows where two or three puppet actors use glove puppets made out of odd socks and can manipulate twice as many characters as there are actors. This activity can happen synchronously with a particular event that is in play, be it the effective date of an industry reform or set of restrictions; a court case or inquiry taking place; or a legislature working on an important law.

An example of this occurred during the long COVID-19 lockdown that affected Victoria last year, where the “DanLiedPeopleDied” and “DictatorDan” hashtags were manipulated on Twitter to create a sentiment of popular distrust against Dan Andrews. Here it was identified that a significant number of the Twitter accounts that drove these hashtags surfaced or changed their behaviour synchronously with the lockdown’s effective period.

But astroturfing can manifest in offline / in-real-life activities like rallies and demonstrations; appearances on talkback radio; letters to newspaper editors; pamphlet drops and traditional advertising techniques.

Let’s not forget that old-fashioned word-of-mouth advertising for an astroturfing campaign can take place here like over the neighbour’s fence, at the supermarket checkout or around the office’s water cooler.

Sometimes the online activity is used to rally for support for one or more offline activities or to increase the amount of word-of-mouth conversation on the topic. Or the pamphlets and outdoor advertising will carry references to the campaign’s online resources so people can find out more “from the horse’s mouth”. This kind of material used for offline promotion can be easily and cheaply produced using “download-to-print” resources, print and copy shops that use cost-effective digital press technology, firms who screen-print T-shirts on demand from digital originals amongst other online-facilitated technologies.

An example of this highlighted by Spectrum News 1 San Antonio in the USA was the protest activity against COVID-19 stay-at-home orders in that country. This was alluding to Donald Trump and others steering public opinion away from a COVID-safe USA.

This method of deceit capitalises on popular trust in the platform and the apparently-benign group behind the message or appearance of popular support for that group or its message. As well, astroturfing is used to weaken any true grassroots support for or against the opinion.

How does astroturfing affect media coverage of an issue?

The easily-plausible arguments tendered by a benign-sounding organisation can encourage journalists to “go with the flow” regarding the organisation’s ideas. It can include treating the organisation’s arguments at face value for a supporting or opposing view on the topic at hand especially where they want to create a balanced piece of material.

This risk is significantly increased in media environments where there isn’t a culture of critical thinking with obvious examples being partisan or tabloid media. Examples of this could be breakfast/morning TV talk shows on private free-to-air TV networks or talkback radio on private radio stations.

But there is a greater risk of this occurring while there is increasingly-reduced investment in public-service and private news media. Here, the fear of newsrooms being reduced or shut down or journalists not being paid much for their output can reduce the standard of journalism and the ability to perform proper due diligence on news sources.

There is also the risk of an astroturfing campaign affecting academic reportage of the issue. This is more so where the student doesn’t have good critical-thinking and research skills and can be easily swayed by spin. It is more so with secondary education or some tertiary-education situations like vocational courses or people at an early stage in their undergraduate studies.

How does astroturfing affect healthy democracies?

All pillars of government can and do fall victim to astroturfing. This can happen at all levels of government ranging from local councils through state or regional governments to the national governments.

During an election, an astroturfing campaign can be used to steer opinion for or against a political party or candidate who is standing for election. In the case of a referendum, it can steer popular opinion towards or against the questions that are the subject of the referendum. This is done in a manner to convey the veneer of popular grassroots support for or against the candidate, party or issue.

The legislature is often a hotbed of political lobbying by interest groups, and astroturfing can be used to create a veneer of popular support for or against legislation or regulation of concern to the interest group. As well, astroturfing can be used as a tool to place pressure on legislature members to advance or stall a proposed law and, in some cases, force a government out of power where there is a stalemate over that law.

The public-service agencies of the executive government who have the power to permit or veto activity are also victims of astroturfing. This comes in the form of whether a project can go ahead or not; or whether a product is licensed for sale within the jurisdiction. It can also affect the popular trust in any measures that officials in the executive government execute.

As well, the judiciary can be tasked with handling legal actions launched by pressure groups who use astroturfing to create a sense of popular support to revise legislation or regulation. It also includes how jurors are influenced in any jury trial or which judges are empanelled in a court of law, especially a powerful appellate court or the jurisdiction’s court of last resort.

Politicians, significant officials and key members of the judiciary can fall victim to character assassination campaigns that are part of one or more astroturfing campaigns. This can affect continual popular trust in these individuals and can even affect the ability for them to live or conduct their public business in safety.

Here, politicians and other significant government officials are increasingly becoming accessible to the populace. This is facilitated by them maintaining a Social-Web presence using a public-facing persona on the popular social-media platforms, with the same account name or “handle” being used across the platforms. In the same context, the various offices and departments maintain their Social-Web presence on the popular platforms using office-wide accounts. This is in addition to other online presences like the ministerial Web pages or public-facing email addresses they or the government maintain.

These officials can be approached by interest groups who post to the official’s Social-Web presence. Or a reference can be created to the various officials and government entities through the use of hashtags or mentions of platform-native account names operated by these entities when someone creates a Social Web post about the official or issue at hand. In a lot of cases, there is reference to sympathetic journalists and media organisations in order to create media interest.

As well, one post with the right message and the right mix of hashtags and referenced account names can be viewed by the targeted decision makers and the populace at the same time. Then people who are sympathetic to that post’s message end up reposting that message, giving it more “heat”.

Here, the Social Web is seen as providing unregulated access to these powerful decision-makers. That is although the decision-makers work with personal assistants or similar staff to vet content that they see. As well, there isn’t any transparency about who is posting the content that references these officials i.e. you don’t know whether it is a local constituent or someone pressured by an interest group.

What can be done about it

The huge question here is what can be done about astroturfing as a means of disinformation.

A significant number of jurisdictions implement attribution requirements for any advertising or similar material as part of their fair-trading, election-oversight, broadcasting, unsolicited-advertising or similar laws. Similarly a significant number of jurisdictions implement lobbyist regulation in relationship to who has access to the jurisdiction’s politicians. As outlined in the RNZ article that I referred to, New Zealand is examining astroturfing in the context of whether they should regulate access to their politicians.

But most of these laws regulate what goes on within the offline space within the jurisdiction that they pertain to. It could become feasible for foreign actors to engage in astroturfing and similar campaigns from other territories across the globe using online means without any action being taken.

The issue of regulating lobbyist access to the jurisdiction’s politicians or significant officials can raise questions. Here it could be about whether the jurisdiction’s citizens have a continual right of access to their elected government or not. As well, there is the issue of assuring governmental transparency and a healthy dialogue with the citizens.

The 2016 fake-news crisis which highlighted the distortion of the 2016 US Presidential Election and UK Brexit referendum became a wake-up call regarding how the online space can be managed to work against disinformation.

Here, Silicon Valley took on the task of managing online search engines, social-media platforms and online advertising networks to regulate foreign influence and assure accountability when it comes to political messaging in the online space. This included identity verification of advertiser accounts or keeping detailed historical records of ads from political advertisers on ad networks or social media or clamping down on coordinated inauthentic behaviour on social media platforms.

In addition to this, an increasingly-large army of “fact-checkers” organised by credible newsrooms, universities and similar organisations appeared. These groups researched and verified claims which were being published through the media or on online platforms and would state whether they are true or false based on their research.

What we can do is research further and trust our instincts when it comes to questionable claims that come from apparently-benign organisations. Here we can do our due diligence and check for things like how long an online account has been in operation, especially if its appearance is synchronous with particular political, regulatory or similar events occurring or being on the horizon.
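As a rough illustration of that due-diligence step, a check like the following could flag accounts whose creation dates cluster suspiciously around a political or regulatory event. The dates and window used here are purely illustrative assumptions.

```python
# A minimal due-diligence sketch: flag accounts whose creation date sits close
# to a political or regulatory event. Dates and window are illustrative only.
from datetime import date

def created_around_event(account_created: date, event_date: date,
                         window_days: int = 30) -> bool:
    """True if the account appeared within `window_days` either side of the event."""
    return abs((account_created - event_date).days) <= window_days

# Example: an account that surfaced just before a hypothetical announcement date.
event = date(2020, 7, 9)
suspect = created_around_event(date(2020, 7, 1), event)
print("Worth a closer look" if suspect else "Nothing unusual about the account's age")
```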

Here you have to look out for behaviours in the online or offline content like:

  • Inflammatory or manipulative language that plays on your emotions
  • Claims to debunk topic-related myths that aren’t really myths
  • Questioning or pillorying those exposing the wrongdoings core to the argument rather than the actual wrongdoings
  • A chorus of the same material from many accounts

Conclusion

We need to be aware of astroturfing as another form of disinformation that is prevalent in the online age. Here it can take in people who are naive and accept information at face value without doing further research on what is being pushed.

Qualcomm to authenticate photos taken on your phone

Article

Android main interactive lock screen

Qualcomm will work towards authenticating photos taken by smartphones or other devices using its ARM silicon at the point of capture

One of the strongest ways to fight misinformation will soon be right in your phone | FastCompany

My Comments

The rise of deepfaked and doctored imagery surfacing on the Web and being used to corroborate lies has started an arms race to verify the authenticity of audio and visual assets.

It was encouraged by the Trusted News Initiative, which is a group of leading newsrooms who want to set standards regarding the authenticity of news imagery and introduce watermarks for this purpose.

TruePic, an image authentication service, is partnering with Qualcomm to develop hardware-based authentication of images as they are being taken. Qualcomm has become the first manufacturer of choice because it is involved with the ARM-based silicon for most Android smartphones and the Windows 10 ARM platform.

This will use actual time and date, data gained from various device sensors and the image itself as it is taken to attach a certificate of authenticity to that image or video footage. This will be used to guarantee the authenticity of the photos or vision before they leave the user’s phone.
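A simplified sketch of that point-of-capture idea appears below: the image bytes, a UTC timestamp and some sensor readings are bound together and signed. Real schemes like TruePic’s rely on hardware-backed keys and proper certificates; the key, field names and sensor values here are assumptions made only to keep the example self-contained.

```python
# A simplified sketch of point-of-capture authentication: bind the image bytes,
# a UTC timestamp and sensor readings into one signed record. The HMAC key here
# is a stand-in for a hardware-protected device key.
import hashlib, hmac, json
from datetime import datetime, timezone

DEVICE_KEY = b"hypothetical-key-held-in-secure-hardware"

def sign_capture(image_bytes: bytes, sensor_data: dict) -> dict:
    record = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "sensors": sensor_data,                    # e.g. GPS fix, orientation
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

certificate = sign_capture(b"...raw image data...", {"lat": -37.81, "lon": 144.96})
print(certificate["captured_at_utc"], certificate["signature"][:16])
```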

TruePic primarily implements this technology in industries like banking, insurance, warranty provision and law enforcement to work against fraudulent images being used to corroborate claims, or where imagery has to be of high forensic standards. But at the moment, TruePic implements this technology as an additional app that users have to install.

The partnership with Qualcomm is to integrate the functionality into the smartphone’s camera firmware so that the software becomes more tamper-evident and this kind of authentication applies to all images captured by that sensor at the user’s discretion.

The fact that TruePic is partnering with Qualcomm at the moment is because most amateur photos are being taken with smartphones which use this kind of silicon. Once they have worked with Qualcomm, other camera chipmakers including Apple would need to collaborate with them to build authenticated-image technology into their camera technology.

It can then appeal to implementation within standalone camera devices like traditional digital cameras, videosurveillance equipment, dashcams and the like. For example, it can be easier to verify media footage shot on pro gear as being authentic or to have videosurveillance footage being offered as evidence verified as being forensically accurate. But in these cases, there may be calls for the devices to be able to have access to highly-accurate time and location references for this to work.

The watermark generated by this technology will be intended to be machine-readable and packaged with the image file. This will make it easier for software to show whether the image is authentic or not and such software could be part of the Trusted News Initiative to authenticate amateur, stringer or other imagery or footage that comes in to a newsroom’s workflow. Or it could be used by eBay, Facebook or Tinder to indicate whether images or vision are a genuine representation of the goods for sale or the profile holder.

But this technology needs to also apply to images captured by dedicated digital cameras like this Canon PowerShot G1 X

The idea of providing this function would be to offer it in an opt-in manner, typically as a shooting “mode” within a camera application. This allows the photographer to preserve their privacy. But the use of authenticated photos won’t allow users to digitally adjust their original photos to make them look better. This same situation may also apply to the use of digital zoom, which effectively crops photos and videos at the time they are taken.

There is the idea of implementing distributed-ledger technology to track edits made to a photo. This can be used to track at what point the photo was edited and what kind of editing work took place. This kind of ledger technology could also apply to copies of that photo, which will be of importance where people save a copy of the image when they save any edits. This will also apply where a derivative work is created from the source file like a still image or a short clip is obtained from a longer file of existing footage.
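A hash-chained log is one way to picture that edit-tracking idea: each entry records the hash of the previous entry, so silently altering or reordering the history breaks the chain. The single-machine Python sketch below only illustrates the principle and is not an actual distributed ledger; the entry fields are assumptions.

```python
# A sketch of the edit-ledger idea: each edit entry carries the hash of the
# previous entry, so tampering with the history is detectable.
import hashlib, json

def add_edit(ledger: list, description: str, image_bytes: bytes) -> None:
    prev_hash = ledger[-1]["entry_hash"] if ledger else "genesis"
    entry = {
        "description": description,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)

history: list = []
add_edit(history, "original capture", b"raw image")
add_edit(history, "cropped to 16:9", b"cropped image")
add_edit(history, "exposure adjusted", b"brightened image")
print(len(history), "entries, last hash", history[-1]["entry_hash"][:12])
```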

A question that will then come about is how the time of day is recorded in these certificates, including the currently-effective time zone and whether the time is obtained from a highly-accurate reference. Such factors may put into doubt the forensic accuracy of these certificates as far as when the photo or footage was actually taken.

For most of us, it could come into its own when combatting deepfake and doctored images used to destabilise society. Those of us who use online dating or social-network platforms may use this to verify the authenticity of a person who is on that platform, thus working against catfishing. Similarly, the use of image authentication at the point of capture may come into its own when we supply images or video to the media or to corroborate transactions.

Gizmodo examines the weaponisation of a Twitter hashtag

Article

How The #DanLiedPeopleDied Hashtag Reveals Australia’s ‘Information Disorder’ Problem | Gizmodo

My Comments

I read in Gizmodo how an incendiary hashtag directed against Daniel Andrews, the State Premier of Victoria in Australia, was pushed around the Twittersphere and am raising this as an article. It is part of keeping HomeNetworking01.info readers aware about disinformation tactics as we increasingly rely on the Social Web for our news.

What is a hashtag

A hashtag is a single keyword preceded by a hash ( # ) symbol that is used to identify posts within the Social Web that feature a concept. It was initially introduced on Twitter as a way of indexing posts created on that platform and making them easy to search by concept. But an increasing number of other Social-Web platforms have enabled the use of hashtags for the same purpose. They are typically used to embody a slogan or idea in an easy-to-remember way across the Social Web.

Most social-media platforms turn these hashtags into a hyperlink that shows a filtered view of all posts featuring that hashtag. They even use statistical calculations to identify the most popular hashtags on that platform or the ones whose visibility is increasing, and present this in meaningful ways like ranked lists or keyword clouds.
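At its simplest, the counting behind those ranked lists can be pictured as tallying hashtag occurrences across recent posts, as in this sketch. Real platforms weight by recency, velocity and engagement; the posts shown are invented examples.

```python
# A bare-bones illustration of surfacing "trending" hashtags: count hashtag
# occurrences in recent posts and rank them. Real platforms use far richer signals.
import re
from collections import Counter

recent_posts = [
    "Get out and vote early #auspol #democracysausage",
    "Queues already forming #democracysausage",
    "Check your enrolment before Friday #auspol",
]

hashtag_counts = Counter(
    tag.lower() for post in recent_posts for tag in re.findall(r"#\w+", post)
)
print(hashtag_counts.most_common(3))
```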

How this came about

Earlier on in the COVID-19 coronavirus pandemic, an earlier hashtag called #ChinaLiedPeopleDied was working the Social Web. This was underscoring a concept with only a small modicum of truth: that the Chinese government didn’t come clean about the genesis of the COVID-19 plague with its worldwide death toll and their role in informing the world about it.

That hashtag was used to fuel Sinophobic hatred against the Chinese community and was one of the first symptoms of questionable information floating around the Social Web regarding COVID-19 issues.

Australia passed through the early months of the COVID-19 plague, and one of its border-control measures for this disease was to require incoming travellers to stay in particular hotels for a fortnight as a quarantine measure before they could roam around Australia. The Australian federal government put this program in the hands of the state governments but offered resources like the use of the military to these governments as part of its implementation.

The second wave of the COVID-19 virus was happening within Victoria and a significant number of the cases had to do with some of the hotels associated with the hotel quarantine program. This caused a very significant death toll and had the state government resort to a raft of very stringent lockdown measures.

A new hashtag called #DanLiedPeopleDied came about because the Premier, Daniel Andrews, as the head of the state’s executive government, wasn’t perceived to have come clean about any and all bungles associated with its management of the hotel quarantine program.

On 14 July 2020, this hashtag first appeared in a Twitter account that initially touched on Egyptian politics and delivered its posts in the Arabic language. But that account suddenly switched countries, languages and political topics, which is one of the symptoms of a Social Web account existing just to peddle disinformation and propaganda.

The hashtag had lain low until 12 August, when a run of Twitter posts featuring it was delivered by hyper-partisan Twitter accounts. This effort, also underscored by newly-created or suspicious accounts that existed to bolster the messaging, was to make it register on Twitter’s systems as a “trending” hashtag.

Subsequently a far-right social-media influencer with a following of 116,000 Twitter accounts ran a post to keep the hashtag going. There was a lot of very low-quality traffic featuring that hashtag or its messaging. It also included a lot of low-effort memes being published to drive the hashtag.

The above-mentioned Gizmodo article has graphs to show how the hashtag appeared over time which is worth having a look at.

What were the main drivers

But a lot of the traffic highlighted in the article was driven by the use of new or inauthentic accounts which aren’t necessarily “bots” – machine operated accounts that provide programmatic responses or posts. Rather this is the handiwork of trolls or sockpuppets (multiple online personas that are perceived to be different but say the same thing).

As well, there was a significant amount of “gaming the algorithm” activity going on in order to raise the profile of that hashtag. This is due to most social-media services implementing algorithms to expose trending activity and populate the user’s main view.
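One crude way to picture how researchers spot that kind of “chorus” from supposedly unrelated accounts is to measure how similar the wording is across their posts. The sketch below uses simple word-overlap (Jaccard) similarity on made-up posts and account names; real analyses use far richer signals than this.

```python
# A rough sketch of spotting "chorus" behaviour: many distinct accounts pushing
# near-identical wording. Jaccard similarity over word sets is a crude stand-in.
def similarity(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

posts = {
    "@acct_new_01": "Dan lied and people died never forget #DanLiedPeopleDied",
    "@acct_new_02": "People died because Dan lied #DanLiedPeopleDied never forget",
    "@long_time_user": "Stage 4 starts tonight, stay safe everyone",
}

accounts = list(posts)
for i, a in enumerate(accounts):
    for b in accounts[i + 1:]:
        if similarity(posts[a], posts[b]) > 0.6:
            print(f"{a} and {b} are posting near-identical text")
```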

Why this is happening

Like with other fake-news, disinformation and propaganda campaigns, the #DanLiedPeopleDied hashtag is an effort to sow seeds of fear, uncertainty and doubt while bringing about discord with information that has very little in the way of truth. As well the main goal is to cause a popular distrust in leadership figures and entities as well as their advice and efforts.

In this case, the campaign was targeted at us Victorians, who were facing social and economic instability associated with the recent stay-at-home orders thanks to COVID-19’s intense reappearance, in order to have us distrust Premier Dan Andrews and the State Government even more. As such, it is an effort to run these kinds of campaigns at people who are in a state of vulnerability, when they are less likely to use defences like critical thought to protect themselves against questionable information.

Australia is rated as one of the most sustainable countries in the world by the Fragile States Index, in the same league as the Nordic countries, Switzerland, Canada and New Zealand. It means that the country is known to be socially, politically and economically stable. But we can find that a targeted information-weaponisation campaign can be used to destabilise a country even further, and we need to be sensitive to such tactics.

One of the key factors behind the problem of information weaponisation is the weakening of traditional media’s role in the dissemination of hard news. This includes younger people preferring to go to online resources, especially the Social Web, portals or news aggregator Websites for their daily news intake. It also includes many established newsrooms receiving reduced funding thanks to reduced advertising, subscription or government income, reducing their ability to pay staff to turn out good-quality news.

When we make use of social media, we need to develop a healthy suspicion regarding what is appearing. Beware of accounts that suddenly appear or develop chameleon behaviours especially when key political events occur around the world. Also be careful about accounts that “spam” their output with a controversial hashtag or adopt a “stuck record” mentality over a topic.

Conclusion

Any time where a jurisdiction is in a state of turmoil is where the Web, especially the Social Web, can be a tool of information warfare. When you use it, you need to be on your guard about what you share or which posts you interact with.

Here, do research on hashtags that are suddenly trending around a social-media platform and play on your emotions and be especially careful of new or inauthentic accounts that run these hashtags.

Study confirms content-recommendation engines can further personal biases

YouTube recommendation list

Content recommendation engines on the likes of YouTube can easily lead viewers down a content rabbit hole if they are not careful

Article

This website lets you see how conspiracy theorists fall down the YouTube rabbit hole | Mashable

My Comments

Increasingly a lot of online services, be they social media services, news-aggregation portals, video streaming services and the like, are using algorithms to facilitate the exposure of undiscovered content to their users. It is part of their vision to effectively create a customised user experience for each person who uses these services and is part of an Internet-driven concept of “mass customisation”.

Those of you who use Netflix may find that your newsletter that they send you has movie recommendations that are based on what you are watching. You will also see on the Netflix screen a “recommendations” playlist with movies that are similar to what you have been watching through that service.

A very common example of this is YouTube with its recommended-content lists such as what to view next or what channels to subscribe to. Here a lot of the content that is viewed on YouTube is the result of viewers using the service’s personalised content recommendations.

The issue being raised regarding these algorithms is how they can perpetuate a personal “thought bubble”, even though there is other material available on the online service that may not mesh with that “bubble”. Typically this is through surfacing content that amplifies what the viewer has seen previously and can pander to their own biases.

An online experiment created by a Web developer and funded by the Mozilla Foundation explores this concept further in context with YouTube. This experiment, called “TheirTube”, emulates the YouTube content-discovery and viewing habits of six different personalities like conspiracists, conservative thinkers and climate deniers when they view content related to their chosen subjects.

Here, it shows up what is recommended in relationship to content to view next or channels to subscribe to for these different personalities and shows how the content recommendation engine can be used to underscore or amplify particular viewpoints.

It is a common problem associated with the artificial-intelligence / machine-learning approach associated with content recommendation that these services use. This is due to the end-user “seeding” the algorithms with the content that they actually interact with or the logical content sources they actually follow. Here, the attributes associated with the content effectively determine the “rules” the algorithm works on.
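The self-reinforcing effect can be pictured with a toy recommender that scores candidate videos purely by overlap with topics already in the viewer’s history, as below. The topics and titles are invented; actual recommendation engines are vastly more elaborate than this.

```python
# A toy illustration of the "seeding" effect: recommendations are scored purely
# by overlap with topics the viewer has already watched, so the existing
# interest profile gets reinforced.
watch_history_topics = {"climate scepticism", "conspiracy", "politics"}

candidate_videos = {
    "Why the climate data is wrong": {"climate scepticism", "politics"},
    "Gardening for small balconies": {"lifestyle", "gardening"},
    "Fact-checking climate claims": {"climate science", "fact-checking"},
}

def score(topics: set) -> int:
    return len(topics & watch_history_topics)

ranked = sorted(candidate_videos, key=lambda title: score(candidate_videos[title]),
                reverse=True)
print(ranked)   # content matching the existing bubble floats to the top
```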

If you are trying to maintain some sort of critical thinking and use content services like YouTube for informational content, you may have to rely less on the content-recommendation engine that they use for finding new content. You may find it useful to manually seek out content with a contrasting viewpoint to avoid the creation of a “thought bubble”.

As well, if you follow the online-service’s recommendations in addition to running contrasting content through the online service, you may be in a position to make the content recommendation engine bring up varied content.

Content recommendation engines that are based on what you choose can easily leave us cocooned in a content bubble that perpetuates personal biases.

A digital watermark to identify the authenticity of news photos

Articles

ABC News 24 coronavirus coverage

The news services that appear on the “screen of respect” that is the main TV screen, like the ABC, are often seen as being “of respect”, and all the screen text is part of their identity

TNI steps up fight against disinformation | Advanced Television

News outlets will digitally watermark content to limit misinformation | Engadget

News Organizations Will Start Using Digital Watermarks To Combat Fake News | Ubergizmo

My Comments

The Trusted News Initiative is a recently-formed group of global news and tech organisations, mostly household names in these fields, who are working together to stop the spread of disinformation where it poses a risk of real-world harm. It also includes flagging misinformation that undermines trust in the TNI’s partner news providers like the BBC. Here, the online platforms can review the content that comes in, perhaps red-flagging questionable content, and newsrooms avoid blindly republishing it.

ABC News website

… as well as their online presence – they will benefit from having their imagery authenticated by a TNI watermark

One of their efforts is to agree on and establish an early-warning system to combat the spread of fake news and disinformation. It is being established in the months leading up to polling day for the 2020 US Presidential Election and flags disinformation where there is an immediate threat to life or election integrity.

It is based on efforts to tackle disinformation associated with the 2019 UK general election, the Taiwan 2020 general election, and the COVID-19 coronavirus plague.

Another tactic is Project Origin, which this article is primarily about.

An issue often associated with fake news and disinformation is the use of imagery and graphics to make the news look credible and from a trusted source.

Typically this involves altered or synthesised images and vision that is overlaid with the logos and other trade dress associated with BBC, CNN or another newsroom of respect. This conveys to people who view this online or on TV that the news is for real and is from a respected source.

Project Origin is about creating a watermark for imagery and vision that comes from a particular authentic content creator. This will degrade whenever the content is manipulated. It will be based around open standards overseen by TNI that relate to authenticating visual content thus avoiding the need to reinvent the wheel when it comes to developing any software for this to work.
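The verification side of such a watermark can be pictured as a fingerprint check: if even one byte of the published image differs from what was originally signed, the check fails. The plain hash comparison below is only an illustration, not Project Origin’s actual watermarking scheme, which is designed to survive legitimate distribution while revealing manipulation.

```python
# A sketch of the verification idea: compare the published image against the
# fingerprint recorded at the newsroom. Any alteration changes the hash.
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

original = b"frame data as broadcast by the newsroom"
published_fingerprint = fingerprint(original)

tampered = original.replace(b"newsroom", b"propaganda outlet")
print("authentic?", fingerprint(tampered) == published_fingerprint)   # False
```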

One question I would have is whether it is only readable by computer equipment or if there is a human-visible element like the so-called logo “bug” that appears in the corner of video content you see on TV. If this is machine-readable only, will there be the ability for a news publisher or broadcaster to overlay a graphic or message that states the authenticity at the point of publication? Similarly, would a Web browser or native client for an online service have extra logic to indicate the authenticity of an image or video footage?

I would also like to see the ability to indicate the date of the actual image or footage being part of the watermark. This is because some fake news tends to be corroborated with older lookalike imagery like crowd footage from a similar but prior event to convince the viewer. Some of us may also look at the idea of embedding the actual or approximate location of the image or footage in the watermark.

There is also the issue of newsrooms importing images and footage from other sources whose equipment they don’t have control over. For example, an increasing amount of amateur and videosurveillance imagery is used in the news usually because the amateur photographer or the videosurveillance setup has the “first images” of the news event. Then there is reliance on stock-image libraries and image archives for extra or historical footage; along with newsrooms and news / PR agencies sharing imagery with each other. Let’s not forget media companies who engage “stringers” (freelance photographers and videographers) who supply images and vision taken with their own equipment.

The question with all this, especially with amateur / videosurveillance / stringer footage taken with equipment that media organisations don’t have control over is how such imagery can be authenticated by a newsroom. This is more so where the image just came off a source like someone’s smartphone or the DVR equipment within a premises’ security room. There is also the factor that one source could tender the same imagery to multiple media outlets, whether through a media-relations team or simply offering it around.

At least Project Origin will be useful as a way for the audience to know the authenticity and provenance of imagery that is purported to corroborate a newsworthy event.

ABC touches on fake news and disinformation in an educational video series

Video Series TV, VHS videocassette recorder and rented video movies

Australian Broadcasting Corporation

Behind The News – Media Literacy Series

How To Spot Fake News (Click or tap to play on YouTube)

Which News Sources Can Be Trusted (Click or tap to play on YouTube)

What Makes News, News (Click or tap to play on YouTube)

How To Spot Bias In The Media (Click or tap to play on YouTube)

Dishonesty, Accuracy And Ethics In The Media (Click or tap to play on YouTube)

My Comments

Regularly, I cover the issue of fake news and disinformation on HomeNetworking01.info. This is because consuming news and information has become part of our online lives thanks to the ubiquity and affordability of the Internet.

I have highlighted the use of online sources like social media, Web portals, search engines or news aggregators as our regular news sources, along with the fact that it is very easy to spread rumour and disinformation around the world thanks to the ease of publishing that the Web provides. As well, it is easy for our contacts to spread links to Web resources, or to repeat the messages in those resources, via the Social Web, email or instant-messaging platforms.

This issue has become of concern since 2016, when fake news and disinformation circulating around the Web was used to distort the outcome of the UK’s Brexit referendum and the US election that brought Donald Trump into the presidency of that country.

Kogan Internet table radio

Since then, I have covered efforts by the tech industry and others to make us aware of fake news, disinformation and propaganda, such as through the use of fact-checkers or online services implementing robust data-security and account-management policies and procedures. This also includes articles that encourage the use of good-quality traditional media sources during critical times like national elections or the coronavirus pandemic, and I even see being armed against fake news and disinformation as part of data security.

The ABC have run a video series, as part of their “Behind The News” schools-focused TV show about the media, which underscores the value of media literacy and of discerning the calibre of the news being presented. On Tuesday 6 July 2020, I watched “The Drum”, where one of the people behind this series described it as highly relevant viewing for everyone no matter how old we are, thanks to the issue of fake news and disinformation being spread around the Web.

It is part of their continued media-literacy efforts, like their “Media Watch” TV series run on Monday nights, which highlights and keeps us aware of media trends and issues.

In that same show, they even recommended that if we post something critiquing a piece of fake news or disinformation, we should reference the material with a screenshot rather than sharing a link to the content. This is because interactions, link shares and the like are often used to “game” social-network and search-engine algorithms, making the questionable material easier to discover.

The first video looked at how and why fake news has been spread over the ages, such as to drive newspaper sales or the listenership and viewership of broadcasts. It also touched on how such news is spread, including by taking advantage of the “thought and social bubbles” we establish. One of the key issues highlighted was the fact that fake news tends to be out of touch with reality, and we were encouraged to research the article and who is behind it before taking it as gospel and sharing it further.

The second video of the series touches on the quality of the news and information sources used to drive a news story. It examines the difference between primary sources, which provide first-hand information “from the horse’s mouth”, and secondary sources, which evaluate or interpret that information.

It also touches on whether a news source is relying primarily on secondary sources or hearsay versus expert or authoritative testimony. The video raises the legitimacy of contrasting opinions, as in academic debate or where there isn’t enough real knowledge on the topic, but we were warned about news sources that are opinion-dominant rather than fact-dominant. Issues like false equivalence, bias and the use of anonymous sources were also identified, along with the reason behind a source presenting information to the journalist or newsroom.

This video even summed up how we assess news sources by emphasising the CRAP rule – Currency (how recent the news is), Reliability (primary vs secondary source as well as reliability of the source), Authority (is the source authoritative on the topic) and Purpose (why was the news shared such as biases or fact vs opinion).

The third video in the series talks about what makes information newsworthy, which depends on who is reporting it and on the consumers who will benefit from the information. It also covered news values like timeliness, frequency of occurrence, cultural proximity and the involvement of people in the public spotlight, along with factors like conflict or the tone of the story. It thoroughly underscored why and how you are told of information that could shape your view of the world.

The fourth video looks at bias within the media and why it is there. The reasons called out include influencing the way we think or vote or what goods and services we buy, keeping the media platform’s sponsors or commercial partners in a positive light, building an increasingly large army of loyal consumers for the platform, or simply pandering to the audience’s existing biases.

It also looked at the personal biases that affect what we consume and how we consume it, including the “I-am-right” confirmation bias, also described with the “rose-tinted glasses” idea. Looking at how people or ideas are represented on a media platform, what kinds of stories appear in that platform’s output (including what gets top place), and how the stories are told with both pictures and words can also highlight potential biases. There was also the point that personal bias can shape what we think of a media outlet.

The last of the videos looks at honesty, accuracy and ethics within the realm of the media. It underscores key values like honest, accurate, fair, independent and respectful reporting, along with the various checks and balances that the media is subject to. Examples include the protections offered by the law of the land, like the tort of defamation, contempt of court and similar offences that protect the proper role of the courts of justice, offences covering hatred of minority groups, and right-to-privacy offences. There is also the oversight offered by entities like broadcast-standards authorities and press councils that have effective clout.

The legal references that were highlighted were primarily based on what happens within Australia, while British viewers may see something very similar there due to implied freedom of speech along with similarly rigorous defamation laws. The USA may take slightly different approaches, especially where they rely on the First Amendment of their Constitution, which grants an express freedom of speech.

But it sums up the role of the media as a check on the powerful and its power to shine a light on what needs to be focused on. The series also looks at how we can’t take the media for granted and need to be aware of the way the news appears on whatever media platform we use. Although the primary focus is on traditional media, in some ways the series can also encourage us to critically assess online media resources.

The video series underscores what the news media is about and covers the issue in a platform-agnostic manner, so we don’t consider a particular media platform or type as the purveyor of fake or questionable news. As well, the series presents the concepts in a simple-to-understand manner, with the use of dramatisation to grab and keep our attention.

Here, I often wonder whether other public-service or community broadcasters are running a similar media-literacy video program that can be pitched at all age levels in order to encourage the communities they reach to be astute about the media they come across.