Tag: disinformation

Australian Electoral Commission takes to Twitter to rebut election disinformation

Articles

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Don’t Spread Disinformation on Twitter or the AEC Will Roast You (gizmodo.com.au)

Federal Election 2022: How the AEC Is Preparing to Combat Misinformation (gizmodo.com.au)

From the horse’s mouth

Australian Electoral Commission

AEC launches disinformation register ahead of 2022 poll (Press Release)

Previous coverage on HomeNetworking01.info

Being cautious about fake news and misinformation in Australia

My Comments

The next 18 months will be very significant as far as general elections in Australia go, with a Federal election and state elections in the most populous states taking place during that period: the Federal election has to take place by May this year, the Victorian state election by November, and the New South Wales state election by March 2023.

Democracy sausages prepared at election day sausage sizzle


Two chances over the next 18 months to benefit from the democracy sausage as you cast your vote
Kerry Raymond, CC BY 4.0 <https://creativecommons.org/licenses/by/4.0>, via Wikimedia Commons

Oh yeah, more chances to eat those democracy sausages available at the school sausage sizzle after you cast that vote. But the campaign machine has started up early this year at the Federal level, with United Australia Party ads appearing on commercial TV since the Winter Olympics, yard signs from various political parties appearing in my local neighbourhood, and an independent candidate for the Kooyong electorate running ads online through Google AdSense, some of which have appeared on HomeNetworking01.info. This is even before the Governor-General has issued the necessary writs to dissolve the Federal Parliament and commence the election cycle.

Ged Kearney ALP candidate yard sign

The campaigns are underway even before the election is called

This season will be coloured with the COVID coronavirus plague and the associated vaccination campaigns, lockdowns and other public-health measures used to mitigate this virus. This will exacerbate Trump-style disinformation campaigns affecting the Australian electoral process, especially from anti-vaccination / anti-public-health-measure groups.

COVID will also exacerbate issues regarding access to the vote in a safe manner. This includes dealing with people who are isolated or quarantined because they or their household members have been struck down by the disease, and allowing people on the testing and vaccination front lines to cast their vote. Or it may be about running the polling booths in a manner that is COVID-safe and assures a proper secret ballot.

There is also the recent flooding in Queensland and NSW, which raises questions about access to the vote for affected communities and the volunteers helping those communities. All these situations depend on people knowing where and how to cast “convenience votes” like early or postal votes, or knowing where the nearest polling booth is, especially with the flooding rendering the usual booths in affected areas out of action.

The Australian Electoral Commission, which oversees elections at the federal level, has established a register to record fake-news and disinformation campaigns that appear online to target Australians. It will also appear on Twitter to debunk disinformation swirling around on that platform, using common hashtags associated with Australian politics and elections.

Add to this a stronger, wider “Stop And Consider” campaign to encourage us to be mindful about what we see, hear or read regarding the election. This builds on the original campaign run during the 2019 Federal election to encourage us to be careful about what we share online. That campaign was driven by the 2019 election being the first of its kind since we became aware of online fake-news and disinformation campaigns and their power to manipulate the vote.

There will also be stronger liaison between the AEC and the online services in relation to sharing intelligence about disinformation campaigns.

But the elephant in the room regarding election safety is IT security and cyber safety for the significant number of IT systems that will create or modify election-related data through this season.

Service Victoria contact-tracing QR code sign at Fairfield Primary School

Even the QR-code contact-tracing platforms used by state governments as part of their COVID management efforts have to be considered as far as IT security for an election is concerned – like this one at a school that is likely to be a polling place

This doesn’t just relate to the electoral oversight bodies but any government, media or civil-society setup in place during the election.

That would encompass things ranging from State governments wanting to head towards fully-electronic voter registration and electoral-roll mark-off processes, through the IT that politicians and political parties use for their business processes and the state-government QR-code contact-tracing platforms regularly used during this COVID-driven era, to the IT operated by the media and journalists themselves to report the election. Here, it’s about the safety of the participants in the election process, the integrity of that process and the ability for voters to make a proper and conscious choice when they cast their vote.

Such systems carry a significant number of risks around their data, such as cyber attacks intended to interfere with or exfiltrate data or slow down the performance of these systems. This is more so where the perpetrators of this activity extend to adverse nation states or organised crime anywhere in the world. As well, interference with these IT systems is used as a way to create and disseminate fake news, disinformation and propaganda.

But the key issue regarding Australia’s elections being safe from disinformation and election interference is for us to be media-savvy. That includes being aware of material that plays on your emotions; being aware of bias in media and other campaigns; knowing where sources of good-quality and trustworthy news are; and placing importance on honesty, accuracy and ethics in the media.

Here, it may be a good chance to look at the “Behind The News” media-literacy TV series the ABC produced during 2020 regarding the issue of fake news and disinformation. Sometimes you may also find that established media, especially the ABC and SBS or the good-quality newspapers, may be the way to go for reliable election information. Even looking at official media releases “from the horse’s mouth” on government or political-party Websites may work as a means to identify exaggeration that may be taking place.

Having the various stakeholders encourage media literacy and disinformation awareness, along with government and other entities taking a strong stance with cyber security can be a way to protect this election season.

YouTube to examine further ways to control misinformation

Article

YouTube recommendation list

YouTube to further crack down on misinformation using warning screens and other strategies

YouTube Eyes New Ways to Stop Misinformation From Spreading Beyond Its Reach – CNET

From the horse’s mouth

YouTube

Inside Responsibility: What’s next on our misinfo efforts (Blog Post)

My Comments

YouTube’s efforts to control the spread of repeated disinformation have so far been found to be quite limited in some ways.

These were focused on managing accounts and channels (collections of YouTube videos submitted by a YouTube account holder and curated by that holder) in a robust manner, like implementing three-strikes policies when repeated disinformation occurs. They extended to managing the content-recommendation engine in order to effectively “bury” that kind of content from end-users’ default views.

But other new issues have come up in relation to this topic. One of these is to continually train the artificial-intelligence / machine-learning subsystems associated with how YouTube operates with new data that represents newer situations. This includes the use of different keywords and different languages.

Another approach that will fly in the face of disinformation purveyors is to point end-users to authoritative resources relating to the topic at hand. This will typically manifest as lists of hyperlinks to text and video resources from respected sources when a video or channel has questionable material.

But a new topic or a new angle on an existing topic can yield a data void where there is scant or no information on the topic from respectable resources. This can happen with a fast-moving news event, fed by the 24-hour news cycle.

Another issue is where someone creates a hyperlink to or embeds a YouTube video in their online presence. This is a common way to put YouTube video content “on the map” and can cause a video to go viral by acquiring many views. In some cases like “communications-first” messaging platforms such as SMS/MMS or instant-messaging, a preview image of the video will appear next to a message that has a link to that video.

Initially YouTube looked at the idea of preventing a questionable resource from being shared through the platform’s user interface. But questions were raised about this approach, including whether it limits a viewer’s freedom to take the content further.

The issue that wasn’t even raised is the fact that the video can be shared without going via YouTube’s user interface. This can be through other means like copying the URL in the address bar if viewing on a regular computer, or invoking the “share” intent on modern desktop and mobile operating systems to take it further. In some operating systems, that can extend to printing out material or “throwing” image or video material to the large-screen TV using a platform like Apple TV or Chromecast. Add to this the fact that a user may want to share the video with others as part of academic research or a news report.

Another idea YouTube is looking at is based on an age-old approach implemented by responsible TV broadcasters, and by YouTube itself with violent, age-restricted or other questionable content. That is to show a warning screen, sometimes accompanied by an audio announcement, before the questionable content plays. Most video-on-demand services implement an interactive approach, at least in their “lean-forward” user interfaces, where the viewer has to assent to the warning before they see any of that content.

In this case, YouTube would run a warning screen regarding the existence of disinformation in the video content before the content plays. Such an approach would make us aware of the situation and act as a “speed bump” against continual consumption of that content or following through on hyperlinks to such content.

Another issue YouTube is working on is keeping its anti-disinformation efforts culturally relevant. This takes in various nations’ historical and political contexts, whether a news or information source is an authoritative independent source or simply a propaganda machine, fact-checking requirements, and linguistic issues amongst other things. The historical and political issues could include conflicts that have peppered a nation’s or culture’s history or how the nation changed governments.

Having support for relevance to various different cultures provides YouTube’s anti-disinformation effort with some “look-ahead” sense when handling further fake-news campaigns. It also encompasses recognising where a disinformation campaign is being “shaped” to a particular geopolitical area with that area’s history woven into the messaging.

But whatever YouTube is doing may have limited effect if the purveyors of this kind of nonsense use other services to host this video content. This can manifest in alternative “free-speech” video hosting services like BitChute, DTube or PeerTube. Or it can be the content creator hosting the video content on their own Website, something that becomes more feasible as the kind of computing power needed for video hosting at scale becomes cheaper.

What is being raised is YouTube using their own resources to limit the spread of disinformation that is hosted on their own servers rather than looking at this issue holistically. But they are looking at issues like the ever-evolving message of disinformation that adapts to particular cultures along with using warning screens before such videos play.

This is compared to third-party-gatekeeper approaches like NewsGuard (HomeNetworking01.info coverage) where an independent third party scrutinises news content and sites then puts their results in a database. Here various forms of logic can work from this database to deny advertising to a site or cause a warning flag to be shown when users interact with that site.
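To illustrate that kind of gatekeeper logic, here is a minimal Python sketch of a client working from a site-ratings database. The domain names, scores and thresholds are all invented for illustration and don’t reflect NewsGuard’s actual database, criteria or API.

```python
# Hypothetical site-ratings database: scores from 0 (untrustworthy) to 100.
SITE_RATINGS = {
    "example-news.com": 92.5,   # credible outlet
    "dodgy-claims.net": 17.0,   # repeatedly publishes false content
}

AD_THRESHOLD = 60.0    # below this, an ad network might withhold advertising
WARN_THRESHOLD = 40.0  # below this, a browser plug-in might flag the site

def assess_site(domain: str) -> list[str]:
    """Return the actions a client could take for a given domain."""
    score = SITE_RATINGS.get(domain)
    if score is None:
        return ["unrated"]          # no entry: leave the site alone
    actions = []
    if score < AD_THRESHOLD:
        actions.append("deny-advertising")
    if score < WARN_THRESHOLD:
        actions.append("show-warning-flag")
    return actions or ["allow"]
```

The point of the design is that the rating is produced once by human reviewers, while many different clients (ad networks, browser extensions, search tools) apply their own thresholds to the same database.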

But by realising that YouTube is being used as a host for fake news and disinformation videos, they are taking further action on this issue. This is even though Google will end up playing cat-and-mouse when it comes to disinformation campaigns.

The Spotify disinformation podcast saga could give other music streaming services a chance

Articles

Spotify Windows 10 Store port

Spotify dabbling in podcasts and strengthening its ties with podcasters is placing it at risk of carrying anti-vaxx and similar disinformation

Joni Mitchell joins Neil Young’s Spotify protest over anti-vax content | Joni Mitchell | The Guardian

Nils Lofgren Pulls Music From Spotify – Billboard

My Comments

Spotify has over the last two years jumped on the podcast-hosting bandwagon even though it originally provided music on demand.

But lately it has been hosting the podcast output of Joe Rogan, who is known for disinformation about COVID vaccines. Spotify even strengthened its business relationship with Joe Rogan using the various content-monetisation options it offers and giving his podcast platform-exclusive treatment.

There has been social disdain about Spotify’s business relationship with Joe Rogan due to social-responsibility issues relating to disinformation about essential issues such as vaccination. Neil Young and Joni Mitchell have pulled their music from the service and an increasing number of their fans are discontinuing business with Spotify. Now Nils Lofgren, the guitarist from Bruce Springsteen’s E Street Band, intends to pull the music he has “clout” over from Spotify and is encouraging more musicians to do so.

Tim Burrowes, who founded Mumbrella, even wrote in his Unmade blog about the possibility of Spotify being subject to what happened to Sky News and Radio 2GB during the Alan Jones days. That was where one or more collective actions took place to drive advertisers to remove their business from those stations. This could be more so where companies have to be aware of brand safety and social responsibility when they advertise their wares.

In some cases, Apple, Google and Amazon could gain traction with their music-on-demand services. But on the other hand, Deezer, Qobuz and Tidal could gain an increased subscriber base especially where there is a desire to focus towards European business or to deal with music-focused media-on-demand services rather than someone who is running video or podcast services in addition.

There are questions about whether a music-streaming service like Spotify should be dabbling in podcasts and spoken-word content. That includes any form of “personalised-radio” service where music, advertising and spoken-word content are presented in a manner akin to a local radio station’s output.

Then the other question that will come about is the expectation for online-audio-playback devices like network speakers, hi-fi network streamers and Internet radios. This would extend to other online-media devices like smart TVs or set-top boxes. Here, it is about allowing different audio-streaming services to be associated with these devices and assuring a simplified consistent user experience out of these services for the duration of the device’s lifespan.

That includes operation-by-reference setups like Spotify Connect where you can manage the music from the online music service via your mobile device, regular computer or similar device. But the music plays through your preferred set of speakers or audio device and isn’t interrupted if you make or take a call, receive a message or play games on your mobile device.

What has come about is that the content hosted on an online-media platform, or the content creators the platform gives special treatment to, may end up affecting that platform’s reputation. This is especially so where the content creator is involved in fake news or disinformation.

Being aware of astroturfing as an insidious form of disinformation

Article

Astroturfing more difficult to track down with social media – academic | RNZ News

My Comments

An issue that is raised in the context of fake news and disinformation is a campaign tactic known as “astroturfing”. This is something that our online life has facilitated thanks to easy-to-produce Websites on affordable Web-hosting deals along with the Social Web.

I am writing about this on HomeNetworking01.info because astroturfing is another form of disinformation that we need to be careful of in this online era.

What is astroturfing?

Astroturfing is organised propaganda activity intended to create a belief of popular grassroots support for a viewpoint in relationship to a cause or policy. This activity is organised by one or more large organisations with it typically appearing as the output of concerned individuals or smaller community organisations such as a peak body for small businesses of a kind.

But there is no transparency about who is actually behind the message or the benign-sounding organisations advancing that message. Nor is there any transparency about the money flow associated with the campaign.

The Merriam-Webster Dictionary, the dictionary of record for the American dialect of the English language, defines it as:

organized activity that is intended to create a false impression of a widespread, spontaneously arising, grassroots movement in support of or in opposition to something (such as a political policy) but that is in reality initiated and controlled by a concealed group or organization (such as a corporation).

The etymology of this word is a play on the “grassroots” expression. It alludes to the AstroTurf synthetic turf initially installed in the Astrodome stadium in Houston in the USA, with the “AstroTurf” trademark becoming a generic trademark for synthetic sportsground turf sold in North America.

This was mainly practised by Big Tobacco to oppose significant taxation and regulation measures against tobacco smoking, but continues to be practised by entities whose interests are against the public good.

How does astroturfing manifest?

It typically manifests as one or more benign-sounding community organisations that appear to demonstrate popular support for or against a particular policy. It typically affects policies for the social or environmental good where there is significant corporate or other “big-money” opposition to these policies.

The Internet era has made this more feasible thanks to the ability to create and host Websites for cheap. As online forums and social media came on board, it became feasible to set up multiple personas and organisational identities on forums and social-media platforms to make it appear as though many people or organisations are demonstrating popular support for the argument. It is also feasible to interlink Websites and online forums or Social-Web presences by posting a link from a Website or blog in a forum or Social-Web post or having articles on a Social Web account appear on one’s Website.

The multiple online personas created by one entity to demonstrate the appearance of popular support are described as “sockpuppet” accounts. This is in reference to children’s puppet shows where two or three puppeteers use glove puppets made out of odd socks, letting each performer manipulate several characters at once. Such activity can happen synchronously with a particular event that is in play, be it the effective date of an industry reform or set of restrictions; a court case or inquiry taking place; or a legislature working on an important law.

An example of this occurred during the long COVID-19 lockdown that affected Victoria last year, where the “DanLiedPeopleDied” and “DictatorDan” hashtags were manipulated on Twitter to create a sentiment of popular distrust against Dan Andrews. Here it was identified that a significant number of the Twitter accounts that drove these hashtags surfaced or changed their behaviour synchronously with the lockdown’s effective period.
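As a rough illustration of how researchers can surface that kind of signal, the Python sketch below flags a cohort of accounts whose first-activity dates cluster tightly around an event. The account dates, event date, window and threshold are all made up for the example; real analyses of the Victorian hashtags were considerably more sophisticated.

```python
from datetime import date, timedelta

def suspiciously_clustered(first_active: list[date],
                           event: date,
                           window_days: int = 14,
                           threshold: float = 0.5) -> bool:
    """Flag a cohort if more than `threshold` of its accounts first
    became active within `window_days` of the event date."""
    window = timedelta(days=window_days)
    near_event = sum(1 for d in first_active if abs(d - event) <= window)
    return near_event / len(first_active) > threshold

# Invented example: an event date and the dates five accounts first posted.
lockdown_start = date(2020, 7, 9)
accounts = [date(2020, 7, 5), date(2020, 7, 12), date(2020, 7, 15),
            date(2018, 3, 2), date(2020, 7, 10)]
print(suspiciously_clustered(accounts, lockdown_start))  # → True (4 of 5 in window)
```

Organic communities tend to show account ages spread over years, so a cohort where most accounts appeared inside a two-week window around a political event is one hint, though never proof on its own, of coordination.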

But astroturfing can also manifest in offline / in-real-life activities like rallies and demonstrations; appearances on talkback radio; letters to newspaper editors; pamphlet drops; and traditional advertising techniques.

Let’s not forget that old-fashioned word-of-mouth advertising for an astroturfing campaign can take place here like over the neighbour’s fence, at the supermarket checkout or around the office’s water cooler.

Sometimes the online activity is used to rally for support for one or more offline activities or to increase the amount of word-of-mouth conversation on the topic. Or the pamphlets and outdoor advertising will carry references to the campaign’s online resources so people can find out more “from the horse’s mouth”. This kind of material used for offline promotion can be easily and cheaply produced using “download-to-print” resources, print and copy shops that use cost-effective digital press technology, firms who screen-print T-shirts on demand from digital originals amongst other online-facilitated technologies.

An example of this highlighted by Spectrum News 1 San Antonio in the USA was the protest activity against COVID-19 stay-at-home orders in that country. This was alluding to Donald Trump and others steering public opinion away from a COVID-safe USA.

This method of deceit capitalises on popular trust in the platform and the apparently-benign group behind the message or appearance of popular support for that group or its message. As well, astroturfing is used to weaken any true grassroots support for or against the opinion.

How does astroturfing affect media coverage of an issue?

The plausible-sounding arguments tendered by a benign-sounding organisation can encourage journalists to “go with the flow” regarding the organisation’s ideas. This can include treating the organisation’s arguments at face value as a supporting or opposing view on the topic at hand, especially where they want to create a balanced piece of material.

This risk is significantly increased in media environments where there isn’t a culture of critical thinking with obvious examples being partisan or tabloid media. Examples of this could be breakfast/morning TV talk shows on private free-to-air TV networks or talkback radio on private radio stations.

But there is a greater risk of this occurring while there is increasingly-reduced investment in public-service and private news media. Here, the fear of newsrooms being reduced or shut down or journalists not being paid much for their output can reduce the standard of journalism and the ability to perform proper due diligence on news sources.

There is also the risk of an astroturfing campaign affecting academic reportage of the issue. This is more so where the student doesn’t have good critical-thinking and research skills and can be easily swayed by spin, as in secondary education or some tertiary-education situations like vocational courses or the early stages of undergraduate study.

How does astroturfing affect healthy democracies?

All pillars of government can and do fall victim to astroturfing. This can happen at all levels of government ranging from local councils through state or regional governments to the national governments.

During an election, an astroturfing campaign can be used to steer opinion for or against a political party or candidate who is standing for election. In the case of a referendum, it can steer popular opinion towards or against the questions that are the subject of the referendum. This is done in a manner to convey the veneer of popular grassroots support for or against the candidate, party or issue.

The legislature is often a hotbed of political lobbying by interest groups, and astroturfing can be used to create a veneer of popular support for or against legislation or regulation of concern to the interest group. As well, astroturfing can be used as a tool to place pressure on legislature members to advance or stall a proposed law and, in some cases, force a government out of power where there is a stalemate over that law.

The public-service agencies of the executive government who have the power to permit or veto activity are also victims of astroturfing. This comes in the form of whether a project can go ahead or not; or whether a product is licensed for sale within the jurisdiction. It can also affect the popular trust in any measures that officials in the executive government execute.

As well, the judiciary can be tasked with handling legal actions launched by pressure groups who use astroturfing to create a sense of popular support to revise legislation or regulation. It also includes how jurors are influenced in any jury trial or which judges are empanelled in a court of law, especially a powerful appellate court or the jurisdiction’s court of last resort.

Politicians, significant officials and key members of the judiciary can fall victim to character assassination campaigns that are part of one or more astroturfing campaigns. This can affect continual popular trust in these individuals and can even affect the ability for them to live or conduct their public business in safety.

Here, politicians and other significant government officials are increasingly becoming accessible to the populace. This is facilitated by them maintaining a Social-Web presence using a public-facing persona on the popular social-media platforms, with the same account name or “handle” being used across the multiple platforms. In the same context, the various offices and departments maintain their Social-Web presence on the popular platforms using office-wide accounts. This is in addition to other online presences like ministerial Web pages or public-facing email addresses that they or the government maintain.

These officials can be approached by interest groups who post to the official’s Social-Web presence. Or a reference can be created to the various officials and government entities through the use of hashtags or mentions of platform-native account names operated by these entities when someone creates a Social Web post about the official or issue at hand. In a lot of cases, there is reference to sympathetic journalists and media organisations in order to create media interest.

As well, one post with the right message and the right mix of hashtags and referenced account names can be viewed by the targeted decision makers and the populace at the same time. Then people who are sympathetic to that post’s message end up reposting that message, giving it more “heat”.

Here, the Social Web is seen as providing unregulated access to these powerful decision-makers, even though the decision-makers work with personal assistants or similar staff to vet the content they see. As well, there isn’t any transparency about who is posting the content that references these officials, i.e. you don’t know whether it is a local constituent or someone pressured by an interest group.

What can be done about it?

The huge question here is what can be done about astroturfing as a means of disinformation.

A significant number of jurisdictions implement attribution requirements for any advertising or similar material as part of their fair-trading, election-oversight, broadcasting, unsolicited-advertising or similar laws. Similarly a significant number of jurisdictions implement lobbyist regulation in relationship to who has access to the jurisdiction’s politicians. As outlined in the RNZ article that I referred to, New Zealand is examining astroturfing in the context of whether they should regulate access to their politicians.

But most of these laws regulate what goes on within the offline space within the jurisdiction that they pertain to. It could become feasible for foreign actors to engage in astroturfing and similar campaigns from other territories across the globe using online means without any action being taken.

The issue of regulating lobbyist access to the jurisdiction’s politicians or significant officials can raise questions. Here it could be about whether the jurisdiction’s citizens have a continual right of access to their elected government or not. As well, there is the issue of assuring governmental transparency and a healthy dialogue with the citizens.

The 2016 fake-news crisis which highlighted the distortion of the 2016 US Presidential Election and UK Brexit referendum became a wake-up call regarding how the online space can be managed to work against disinformation.

Here, Silicon Valley took on the task of managing online search engines, social-media platforms and online advertising networks to regulate foreign influence and assure accountability when it comes to political messaging in the online space. This included identity verification of advertiser accounts or keeping detailed historical records of ads from political advertisers on ad networks or social media or clamping down on coordinated inauthentic behaviour on social media platforms.

In addition to this, an increasingly-large army of “fact-checkers” organised by credible newsrooms, universities and similar organisations appeared. These groups researched and verified claims which were being published through the media or on online platforms and would state whether they are true or false based on their research.

What we can do is research further and trust our instincts when it comes to questionable claims that come from apparently-benign organisations. Here we can do our due diligence and check for things like how long an online account has been in operation for, especially if it is synchronous to particular political, regulatory or similar events occurring or being on the horizon.

Here you have to look out for behaviours in the online or offline content like:

  • Inflammatory or manipulative language that plays on your emotions
  • Claims to debunk topic-related myths that aren’t really myths
  • Questioning or pillorying those exposing the wrongdoings core to the argument rather than the actual wrongdoings
  • A chorus of the same material from many accounts
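That last behaviour, a chorus of the same material from many accounts, lends itself to simple automated checks. The Python sketch below normalises post text and counts how many distinct accounts posted the same message; the sample posts and the three-account threshold are invented for illustration, and real detection would use fuzzier matching than exact equality.

```python
import re
from collections import defaultdict

def normalise(text: str) -> str:
    """Lower-case, strip URLs and punctuation, and collapse whitespace
    so trivially-varied copies of a message compare equal."""
    text = re.sub(r"https?://\S+", "", text.lower())
    text = re.sub(r"[^a-z0-9\s]", "", text)
    return " ".join(text.split())

def choruses(posts: list[tuple[str, str]], min_accounts: int = 3) -> list[str]:
    """Return normalised messages posted by at least `min_accounts` accounts."""
    by_message = defaultdict(set)
    for account, text in posts:
        by_message[normalise(text)].add(account)
    return [msg for msg, accounts in by_message.items()
            if len(accounts) >= min_accounts]

# Invented sample: three accounts pushing near-identical wording.
posts = [
    ("user_a", "Ordinary folk OPPOSE this reform! http://x.example"),
    ("user_b", "ordinary folk oppose this reform"),
    ("user_c", "Ordinary folk oppose this reform!!!"),
    ("user_d", "I actually quite like the reform."),
]
print(choruses(posts))  # → ['ordinary folk oppose this reform']
```

Three people genuinely agreeing will usually phrase things differently; dozens of accounts emitting the same sentence with only cosmetic variation is the textual fingerprint of a sockpuppet campaign.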

Conclusion

We need to be aware of astroturfing as another form of disinformation that is prevalent in the online age. Here it can take in people who are naive and accept information at face value without doing further research on what is being pushed.

The US now takes serious action about electoral disinformation

Article

Now Uncle Sam is taking action on voter suppression

US arrests far-right Twitter troll for 2016 election interference | Engadget

From the horse’s mouth

United States Department Of Justice

Social Media Influencer Charged with Election Interference Stemming from Voter Disinformation Campaign (Press Release)

My Comments

Previously, when I have talked about activities that social media companies have undertaken regarding misinformation during election cycles, including misinformation to suppress voter participation, I have covered what these companies in the private sector are doing.

But I have also wanted to see a healthy dialogue between the social-media private sector and public-sector agencies responsible for the security and integrity of the elections. This is whether they are an election-oversight authority like the FEC in the USA or the AEC in Australia; a broadcast oversight authority like the FCC in the USA or OFCOM in the UK; or a consumer-rights authority like the FTC in the USA or the ACCC in Australia. Here, these authorities need to be able to know where the proper communication of electoral information is at risk so they can take appropriate education and enforcement action regarding anything that distorts the election’s outcome.

Just lately, the US government arrested a Twitter troll who had been running information on his Twitter feed to dissuade Americans from participating properly and making their vote count in the 2016 Presidential Election. Here, the troll was suggesting that voters not attend their local polling booths but cast their vote using SMS or social media, which isn’t considered a proper means of casting a vote in the USA. Twitter had banned him and a number of alt-right figureheads that year for harassment.

These charges are based on a little-known US statute that proscribes activity denying or dissuading a US citizen from exercising their rights under that country’s Constitution, including the right to cast a legitimate vote at an election.

But this criminal case could be seen as a means to create a “conduit” between social media platforms and the public sector to use the full extent of the law to clamp down on disinformation and voter suppression using the Web. I also see it as a chance for public prosecutors to examine the laws of the land and use them as a tool to work against the fake news and disinformation scourge.

This is a criminal matter before the courts of law in the USA and the defendant is presumed innocent until found guilty in a court of law.

Qualcomm to authenticate photos taken on your phone

Article

Android main interactive lock screen

Qualcomm will work towards authenticating photos taken by smartphones or other devices using its ARM silicon at the point of capture

One of the strongest ways to fight misinformation will soon be right in your phone | FastCompany

My Comments

The rise of deepfaked and doctored imagery surfacing on the Web and being used to corroborate lies has started an arms race to verify the authenticity of audio and visual assets.

It was encouraged by the Trusted News Initiative, a group of leading newsrooms who want to set standards regarding the authenticity of news imagery and introduce watermarks for this purpose.

TruePic, an image authentication service, is partnering with Qualcomm to develop hardware-based authentication of images as they are being taken. Qualcomm has become the manufacturer of choice because its ARM-based silicon powers most Android smartphones and the Windows 10 on ARM platform.

This will use actual time and date, data gained from various device sensors and the image itself as it is taken to attach a certificate of authenticity to that image or video footage. This will be used to guarantee the authenticity of the photos or vision before they leave the user’s phone.
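As a rough sketch of the idea, a capture-time certificate could bind the image’s hash to its capture metadata and sign the bundle so any later pixel change is detectable. The field names and the use of a symmetric HMAC below are illustrative assumptions only; a real design like TruePic’s would use an asymmetric key held in secure hardware.

```python
import hashlib
import hmac
import json

# Hypothetical device key; real designs keep an asymmetric key in secure hardware.
DEVICE_KEY = b"secret-key-held-in-secure-enclave"

def make_capture_certificate(image_bytes, metadata):
    """Bind the image hash and capture metadata together, then sign the bundle."""
    payload = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(DEVICE_KEY, blob, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_capture_certificate(image_bytes, cert):
    """Check both the signature and that the image still matches its recorded hash."""
    blob = json.dumps(cert["payload"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, blob, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, cert["signature"])
            and cert["payload"]["image_sha256"] == hashlib.sha256(image_bytes).hexdigest())

image = b"...raw sensor output..."
cert = make_capture_certificate(image, {"taken_at": "2020-08-14T10:30:00+10:00",
                                        "gps": [-37.81, 144.96]})
print(verify_capture_certificate(image, cert))            # True for the original
print(verify_capture_certificate(image + b"edit", cert))  # False once pixels change
```

The key point the sketch shows is that the certificate travels with the image, so any recipient holding the verification key can check authenticity without contacting the device.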

TruePic primarily implements this technology in industries like banking, insurance, warranty provision and law enforcement to work against fraudulent images being used to corroborate claims, or where imagery has to meet high forensic standards. But at the moment, TruePic implements this technology as an additional app that users have to install.

The partnership with Qualcomm is to integrate the functionality into the smartphone’s camera firmware so that the software becomes more tamper-evident and this kind of authentication applies, at the user’s discretion, to all images captured by that sensor.

TruePic is partnering with Qualcomm first because most amateur photos are taken with smartphones that use this kind of silicon. Once the work with Qualcomm is done, other camera chipmakers including Apple would need to collaborate with TruePic to build authenticated-image technology into their camera hardware.

The technology could then be implemented within standalone camera devices like traditional digital cameras, videosurveillance equipment, dashcams and the like. For example, it could become easier to verify media footage shot on pro gear as authentic, or to have videosurveillance footage offered as evidence verified as forensically accurate. But in these cases, there may be calls for the devices to have access to highly-accurate time and location references for this to work.

The watermark generated by this technology will be intended to be machine-readable and packaged with the image file. This will make it easier for software to show whether the image is authentic or not, and such software could be part of the Trusted News Initiative to authenticate amateur, stringer or other imagery or footage that comes into a newsroom’s workflow. Or it could be used by eBay, Facebook or Tinder to indicate whether images or vision are a genuine representation of the goods for sale or the profile holder.

But this technology needs to also apply to images captured by dedicated digital cameras like this Canon PowerShot G1 X

The idea would be to offer this function in an opt-in manner, typically as a shooting “mode” within a camera application. This allows the photographer to preserve their privacy. But the use of authenticated photos won’t allow users to digitally adjust their original photos to make them look better. This same situation may also apply to the use of digital zoom, which effectively crops photos and videos at the time they are taken.

There is the idea of implementing distributed-ledger technology to track edits made to a photo. This can be used to track at what point the photo was edited and what kind of editing work took place. This kind of ledger technology could also apply to copies of that photo, which will be of importance where people save a copy of the image when they save any edits. It will also apply where a derivative work is created from the source file, like a still image or a short clip obtained from a longer file of existing footage.
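One hedged way to picture such an edit ledger is a simple hash chain, where each edit record commits to the record before it, so history cannot be silently rewritten or reordered. The record fields and action names below are invented for illustration and are not any vendor’s actual format.

```python
import hashlib
import json

def chain_entry(prev_hash, action, image_bytes):
    """Append an edit record whose hash covers the previous record (a simple hash chain)."""
    record = {
        "prev": prev_hash,
        "action": action,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    record_hash = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record, record_hash

original = b"raw image bytes"
cropped = b"raw image by"   # stand-in for a cropped derivative

log = []
h = "genesis"
for action, img in [("capture", original), ("crop", cropped)]:
    record, h = chain_entry(h, action, img)
    log.append(record)

# Each record commits to its predecessor, so tampering with history is detectable.
recomputed = hashlib.sha256(json.dumps(log[0], sort_keys=True).encode()).hexdigest()
print(log[1]["prev"] == recomputed)  # True when the chain is intact
```

A distributed ledger would replicate such a chain across parties so no single editor can rewrite it, which is the property the article is after.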

A question that will then come about is how the time of day is recorded in these certificates, including the currently-effective time zone and whether the time is obtained from a highly-accurate reference. Such factors may put into doubt the forensic accuracy of these certificates as far as when the photo or footage was actually taken.

For most of us, it could come into its own when combatting deepfake and doctored images used to destabilise society. Those of us who use online dating or social-network platforms may use this to verify the authenticity of a person on that platform, thus working against catfishing. Similarly, image authentication at the point of capture may come into its own when we supply images or video to the media or to corroborate transactions.

Gizmodo examines the weaponisation of a Twitter hashtag

Article

How The #DanLiedPeopleDied Hashtag Reveals Australia’s ‘Information Disorder’ Problem | Gizmodo

My Comments

I read in Gizmodo how an incendiary hashtag directed against Daniel Andrews, the State Premier of Victoria in Australia, was pushed around the Twittersphere and am raising this as an article. It is part of keeping HomeNetworking01.info readers aware about disinformation tactics as we increasingly rely on the Social Web for our news.

What is a hashtag

A hashtag is a single keyword preceded by a hash ( # ) symbol that is used to identify posts within the Social Web that feature a concept. It was initially introduced on Twitter as a way of indexing posts created on that platform and making them easy to search by concept. But an increasing number of other social-Web platforms have enabled the use of hashtags for the same purpose. They are typically used to embody a slogan or idea in an easy-to-remember way across the Social Web.

Most social-media platforms turn these hashtags into a hyperlink that shows a filtered view of all posts featuring that hashtag. They even use statistical calculations to identify the most popular hashtags on the platform, or the ones whose visibility is increasing, and present this in meaningful ways like ranked lists or keyword clouds.
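To illustrate how a platform might index hashtags and tally the popular ones, here is a minimal Python sketch. Real platforms apply far more elaborate parsing rules and trend-detection statistics; the simple word-character pattern and the sample posts here are assumptions for illustration only.

```python
import re
from collections import Counter

def extract_hashtags(post):
    """Pull out hashtags: a '#' followed by word characters (simplified vs. real platform rules)."""
    return [tag.lower() for tag in re.findall(r"#(\w+)", post)]

posts = [
    "Great day at the polls #auspol #democracysausage",
    "Queue was long but worth it #DemocracySausage",
    "Counting underway tonight #auspol",
]

# Tally every hashtag across all posts, case-folded so variants count together
tally = Counter(tag for post in posts for tag in extract_hashtags(post))
print(tally.most_common(2))  # [('auspol', 2), ('democracysausage', 2)]
```

A “trending” calculation would then compare such tallies across time windows, flagging tags whose counts are accelerating rather than merely large.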

How this came about

Earlier on in the COVID-19 coronavirus pandemic, an earlier hashtag called #ChinaLiedPeopleDied was doing the rounds of the Social Web. It pushed a claim, with only a small modicum of truth to it, that the Chinese government hadn’t come clean about the genesis of the COVID-19 plague with its worldwide death toll, nor about its role in informing the world about it.

That hashtag was used to fuel Sinophobic hatred against the Chinese community and was one of the first symptoms of questionable information floating around the Social Web regarding COVID-19 issues.

Australia passed through the early months of the COVID-19 plague, and one of its border-control measures was to require incoming travellers to stay in particular hotels for a fortnight, as a quarantine measure, before they could roam around Australia. The Australian federal government put this program in the hands of the state governments but offered resources, like the use of the military, as part of its implementation.

The second wave of the COVID-19 virus was happening within Victoria, and a significant number of the cases were traced to some of the hotels associated with the hotel quarantine program. This caused a very significant death toll and led the state government to impose a raft of very stringent lockdown measures.

A new hashtag called #DanLiedPeopleDied came about because the Premier, Daniel Andrews, as head of the state’s executive government, wasn’t perceived to have come clean about any and all bungles associated with the management of the hotel quarantine program.

On 14 July 2020, this hashtag first appeared in a Twitter account that initially touched on Egyptian politics and delivered its posts in the Arabic language. But it suddenly switched countries, languages and political topics, which is one of the symptoms of a Social Web account existing just to peddle disinformation and propaganda.

The hashtag lay low until 12 August, when a run of Twitter posts featuring it was delivered by hyper-partisan Twitter accounts. This effort, also underscored by newly-created or suspicious accounts that existed to bolster the messaging, was intended to make it register on Twitter’s systems as a “trending” hashtag.

Subsequently, a far-right social-media influencer with a following of 116,000 Twitter accounts ran a post to keep the hashtag going. There was a lot of very low-quality traffic featuring that hashtag or its messaging, including a lot of low-effort memes published to drive the hashtag.

The above-mentioned Gizmodo article has graphs showing how the hashtag appeared over time, which are worth having a look at.

What were the main drivers

A lot of the traffic highlighted in the article was driven by the use of new or inauthentic accounts, which aren’t necessarily “bots” – machine-operated accounts that provide programmatic responses or posts. Rather, this is the handiwork of trolls or sockpuppets (multiple online personas that are perceived to be different but say the same thing).

As well, there was a significant amount of “gaming the algorithm” activity going on in order to raise the profile of that hashtag. This is due to most social-media services implementing algorithms to expose trending activity and populate the user’s main view.

Why this is happening

As with other fake-news, disinformation and propaganda campaigns, the #DanLiedPeopleDied hashtag is an effort to sow seeds of fear, uncertainty and doubt, and to bring about discord, with information that has very little truth to it. The main goal is to cause popular distrust in leadership figures and entities, as well as their advice and efforts.

In this case, the campaign was targeted at us Victorians, who were facing social and economic instability associated with the recent stay-at-home orders brought on by COVID-19’s intense reappearance, in order to have us distrust Premier Dan Andrews and the State Government even more. These kinds of campaigns are run against people in a state of vulnerability, when they are less likely to use defences like critical thought to protect themselves against questionable information.

As far as I know, Australia is rated as one of the most sustainable countries in the world by the Fragile States Index, in the same league as the Nordic countries, Switzerland, Canada and New Zealand. It means that the country is known to be socially, politically and economically stable. But a targeted information-weaponisation campaign can be used to destabilise even such a country, and we need to be sensitive to such tactics.

One of the key factors behind the problem of information weaponisation is the weakening of traditional media’s role in the dissemination of hard news. This includes younger people preferring to go to online resources, especially the Social Web, portals or news aggregator Websites for their daily news intake. It also includes many established newsrooms receiving reduced funding thanks to reduced advertising, subscription or government income, reducing their ability to pay staff to turn out good-quality news.

When we make use of social media, we need to develop a healthy suspicion regarding what is appearing. Beware of accounts that suddenly appear or develop chameleon behaviours especially when key political events occur around the world. Also be careful about accounts that “spam” their output with a controversial hashtag or adopt a “stuck record” mentality over a topic.
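A simple heuristic along these lines can be sketched in code: flag accounts created shortly before a key event, or accounts all pushing identical text. The thresholds, account fields and sample handles below are purely illustrative assumptions, not how any platform actually detects inauthentic behaviour.

```python
from datetime import date

def flag_suspicious(accounts, event_date, max_age_days=30):
    """Flag accounts created shortly before a key event, or posting duplicate text."""
    flagged = []
    seen_text = set()
    for acct in accounts:
        age = (event_date - acct["created"]).days
        duplicate = acct["last_post"] in seen_text
        seen_text.add(acct["last_post"])
        if age <= max_age_days or duplicate:
            flagged.append(acct["handle"])
    return flagged

accounts = [
    {"handle": "@longtimer", "created": date(2015, 3, 1), "last_post": "Nice weather today"},
    {"handle": "@newburner1", "created": date(2020, 8, 1), "last_post": "#DanLiedPeopleDied"},
    {"handle": "@newburner2", "created": date(2020, 8, 5), "last_post": "#DanLiedPeopleDied"},
]
print(flag_suspicious(accounts, date(2020, 8, 12)))  # ['@newburner1', '@newburner2']
```

Either signal on its own proves nothing, which is why human judgement and further research remain the real defence; the sketch only shows why account age and repeated messaging are worth checking.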

Conclusion

Any time where a jurisdiction is in a state of turmoil is where the Web, especially the Social Web, can be a tool of information warfare. When you use it, you need to be on your guard about what you share or which posts you interact with.

Here, do research on hashtags that suddenly trend around a social-media platform and play on your emotions, and be especially careful of new or inauthentic accounts that run these hashtags.

Study confirms content-recommendation engines can further personal biases

YouTube recommendation list

Content recommendation engines on the likes of YouTube can easily lead viewers down a content rabbit hole if they are not careful

Article

This website lets you see how conspiracy theorists fall down the YouTube rabbit hole | Mashable

My Comments

Increasingly a lot of online services, be they social media services, news-aggregation portals, video streaming services and the like, are using algorithms to facilitate the exposure of undiscovered content to their users. It is part of their vision to effectively create a customised user experience for each person who uses these services and is part of an Internet-driven concept of “mass customisation”.

Those of you who use Netflix may find that your newsletter that they send you has movie recommendations that are based on what you are watching. You will also see on the Netflix screen a “recommendations” playlist with movies that are similar to what you have been watching through that service.

A very common example of this is YouTube with its recommended-content lists such as what to view next or what channels to subscribe to. Here a lot of the content that is viewed on YouTube is the result of viewers using the service’s personalised content recommendations.

The issue being raised regarding these algorithms is how they can perpetuate a personal “thought bubble”, even though other material is available on the online service that may not mesh with that “bubble”. Typically this happens through surfacing content that amplifies what the viewer has seen previously and can pander to their own biases.

An online experiment created by a Web developer and funded by the Mozilla Foundation explores this concept further in context with YouTube. This experiment, called “TheirTube”, emulates the YouTube content-discovery and viewing habits of six different personalities like conspiracists, conservative thinkers and climate deniers when they view content related to their chosen subjects.

Here, it shows up what is recommended in relationship to content to view next or channels to subscribe to for these different personalities and shows how the content recommendation engine can be used to underscore or amplify particular viewpoints.

It is a common problem associated with the artificial-intelligence / machine-learning approach associated with content recommendation that these services use. This is due to the end-user “seeding” the algorithms with the content that they actually interact with or the logical content sources they actually follow. Here, the attributes associated with the content effectively determine the “rules” the algorithm works on.
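A toy sketch can show how this seeding works: score unwatched items by how much their tags overlap with what the user already watched, so one interaction tilts every later recommendation. The catalogue, tags and scoring rule are invented for illustration and bear no relation to YouTube’s actual algorithm.

```python
from collections import Counter

# Toy catalogue: each video tagged with topics (illustrative data only)
catalogue = {
    "moon-landing-hoax": {"conspiracy"},
    "flat-earth-proof": {"conspiracy"},
    "apollo-11-documentary": {"history", "space"},
    "rocket-engineering": {"space", "science"},
}

def recommend(watch_history, k=2):
    """Rank unwatched videos by overlap with the tags the user already watches."""
    profile = Counter(tag for video in watch_history for tag in catalogue[video])
    scores = {video: sum(profile[tag] for tag in tags)
              for video, tags in catalogue.items() if video not in watch_history}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# One conspiracy video seeds the profile; the engine surfaces more of the same.
print(recommend(["moon-landing-hoax"]))  # 'flat-earth-proof' ranks first
```

Because the recommended item, once watched, feeds back into the profile, the loop narrows with every click; this is the feedback dynamic that TheirTube visualises.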

If you are trying to maintain some sort of critical thinking and use content services like YouTube for informational content, you may have to rely less on the content-recommendation engine that they use for finding new content. You may find it useful to manually seek out content with a contrasting viewpoint to avoid the creation of a “thought bubble”.

As well, if you follow the online-service’s recommendations in addition to running contrasting content through the online service, you may be in a position to make the content recommendation engine bring up varied content.

Content recommendation engines based on what you choose can easily cocoon us in a content bubble that perpetuates personal biases.

WhatsApp to allow users to search the Web regarding content in their messages

WhatsApp Search The Web infographic courtesy of WhatsApp

WhatsApp to allow you to search the Web for text related to viral messages posted on that instant messaging app

Article

WhatsApp Pilots ‘Search the Web’ Tool for Fact-Checking Forwarded Messages | Gizmodo Australia

From the horse’s mouth

WhatsApp

Search The Web (blog post)

My Comments

WhatsApp is taking action to highlight the fact that fake news and disinformation don’t just get passed through the Social Web. Here, they are highlighting the use of instant messaging and, to some extent, email as a vector for this kind of traffic, one that is as old as the World Wide Web itself.

They have improved on their previous efforts regarding this kind of traffic initially by using a “double-arrow” icon on the left of messages that have been forwarded five or more times.

But now they are trialling an option to allow users to Google the contents of a forwarded message to check its veracity. One way to check a news item’s veracity is whether one or more news publishers or broadcasters that you trust are covering the story and what kind of light they are shining on it.

Here, the function manifests as a magnifying-glass icon that conditionally appears near forwarded messages. If you click or tap on this icon, you start a browser session that shows the results of a pre-constructed Google-search Weblink created by WhatsApp. It avoids the need to copy then paste the contents of a forwarded message from WhatsApp to your favourite browser running your favourite search engine or to the Google app’s search box. This is something that can be very difficult with mobile devices.

But does this function break the end-to-end encryption that WhatsApp implements for conversations? No, because it works on the cleartext that you see on your screen, simply creating a specially-crafted Google-search Weblink that is passed to whatever software handles Weblinks by default.
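The mechanics can be sketched in a few lines of Python. The exact URL format WhatsApp generates is an assumption here, but any standard Google search link built from the visible message text works the same way and requires no access to the encrypted transport.

```python
from urllib.parse import quote_plus

def build_search_link(message_text):
    """Build a Google search URL from the visible (already-decrypted) message text."""
    return "https://www.google.com/search?q=" + quote_plus(message_text)

link = build_search_link("Forwarded: 5G towers cause illness, share before deleted!")
print(link)
```

Because the link is constructed client-side from text the user can already read, nothing about the end-to-end encrypted channel is weakened; the search engine only ever sees what the user chose to look up.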

An initial pilot run is being made available in Italy, Brazil, Ireland (Eire), UK, Mexico, Spain and the USA. It will be part of the iOS and Android native clients and the messaging service’s Web client.

WhatsApp could evolve this function further by allowing the user to use different search engines like Bing or DuckDuckGo. But they would have to know of any platform-specific syntax requirements for each of these platforms and it may be a feature that would have to be rolled out in a piecemeal fashion.

They could offer the “search the Web” function for any message, rather than only for forwarded messages. I see it as being relevant for people who use the group-chatting functionality that WhatsApp offers, because people can use a group chat as a place to post a rant that links to a questionable Web resource. Or you may have a relative or friend who simply posts questionable information as part of their conversation with you.

At least WhatsApp is adding features to its chat platform’s client software to make it easier to put the brakes on disinformation spreading through it. This could be something worth investigating by other instant-messaging platforms, including SMS/MMS text clients.

A digital watermark to identify the authenticity of news photos

Articles

ABC News 24 coronavirus coverage

The news services that appear on the “screen of respect” that is the main TV screen, like the ABC, are often seen as being “of respect”, and all the screen text is part of their identity

TNI steps up fight against disinformation  | Advanced Television

News outlets will digitally watermark content to limit misinformation | Engadget

News Organizations Will Start Using Digital Watermarks To Combat Fake News | Ubergizmo

My Comments

The Trusted News Initiative is a recently formed group of global news and tech organisations, mostly household names in these fields, who are working together to stop the spread of disinformation where it poses a risk of real-world harm. This also includes flagging misinformation that undermines trust in the TNI’s partner news providers like the BBC. Here, the online platforms can review the content that comes in, perhaps red-flagging questionable content, and newsrooms avoid blindly republishing it.

ABC News website

.. as well as their online presence – they will benefit from having their imagery authenticated by a TNI watermark

One of their efforts is to agree on and establish an early-warning system to combat the spread of fake news and disinformation. It is being established in the months leading up to polling day for the 2020 US Presidential Election and is flagging disinformation where there is an immediate threat to life or election integrity.

It is based on efforts to tackle disinformation associated with the 2019 UK general election, the Taiwan 2020 general election, and the COVID-19 coronavirus plague.

Another tactic is Project Origin, which this article is primarily about.

An issue often associated with fake news and disinformation is the use of imagery and graphics to make the news look credible and from a trusted source.

Typically this involves altered or synthesised images and vision that is overlaid with the logos and other trade dress associated with BBC, CNN or another newsroom of respect. This conveys to people who view this online or on TV that the news is for real and is from a respected source.

Project Origin is about creating a watermark for imagery and vision that comes from a particular authentic content creator, which will degrade whenever the content is manipulated. It will be based around open standards, overseen by the TNI, that relate to authenticating visual content, thus avoiding the need to reinvent the wheel when developing software for this to work.

One question I would have is whether it is only readable by computer equipment or whether there is a human-visible element like the so-called logo “bug” that appears in the corner of video content you see on TV. If it is machine-readable only, will there be the ability for a news publisher or broadcaster to overlay a graphic or message that states the authenticity at the point of publication? Similarly, would a Web browser or native client for an online service have extra logic to indicate the authenticity of an image or video footage?

I would also like to see the ability to indicate the date of the actual image or footage being part of the watermark. This is because some fake news tends to be corroborated with older lookalike imagery like crowd footage from a similar but prior event to convince the viewer. Some of us may also look at the idea of embedding the actual or approximate location of the image or footage in the watermark.

There is also the issue of newsrooms importing images and footage from other sources whose equipment they don’t have control over. For example, an increasing amount of amateur and videosurveillance imagery is used in the news usually because the amateur photographer or the videosurveillance setup has the “first images” of the news event. Then there is reliance on stock-image libraries and image archives for extra or historical footage; along with newsrooms and news / PR agencies sharing imagery with each other. Let’s not forget media companies who engage “stringers” (freelance photographers and videographers) who supply images and vision taken with their own equipment.

The question with all this, especially with amateur / videosurveillance / stringer footage taken with equipment that media organisations don’t have control over is how such imagery can be authenticated by a newsroom. This is more so where the image just came off a source like someone’s smartphone or the DVR equipment within a premises’ security room. There is also the factor that one source could tender the same imagery to multiple media outlets, whether through a media-relations team or simply offering it around.

At least Project Origin will be useful as a method to allow the audience to know the authenticity and provenance of imagery that is purported to corroborate a newsworthy event.