YouTube has recently been taking action against the dissemination of disinformation via its video-sharing platform, and according to results published from a recent study, this has affected that disinformation's distribution across other parts of the Social Web.
The Engadget article shows how the report from the study analyses what happened when certain YouTube policies came into play and their effect on disinformation regarding the 2020 US Presidential Election and its results. All dates referenced in this article are local to North America.
On December 9, YouTube enacted a policy to remove videos that claim widespread errors and fraud changed the outcome of the US Presidential Election. This yielded a significant drop in shares of this kind of disinformation via Twitter.
Similarly, after the January 6 Capitol Hill Insurrection, YouTube handed out “strikes” against online-content channels on its platform for sharing misinformation about the election results. A struck channel cannot upload new content for a period of time or, if it has accrued three strikes in 90 days, it is removed from YouTube. It is very similar to the penalty points or demerit points that are part of most jurisdictions’ traffic-law systems.
This policy also yielded a similarly sharp drop in the sharing of such content across the Twitter platform. A similar pattern was also noticed on Inauguration Day, when President Joe Biden was sworn into office.
This same report observes similar metrics across Facebook's platforms using the analytics tools that Facebook provides, such as CrowdTangle.
Why is this so?
YouTube is seen as the “go-to” online platform for hosting and viewing user-generated video content. This is due to it being free of charge for hosting or casual viewing of such content, while just about all mobile and set-top / connected-TV platforms either have platform-native apps or inherent support for this platform.
As well, it is relatively easy to share YouTube-hosted video content on other online platforms thanks to logic that simplifies this process for the common social-media sites like Facebook or Twitter. Similarly, if a video is posted publicly to YouTube, Google makes it easier for it to appear on its well-known search engine, whose brand has become a generic trademark for searching for online information.
If Google maintains robust content control on the YouTube platform, there is a reduced quantity of shareable fake news and disinformation there. As well, people who use YouTube for video content are likely to use the Facebook Group's platforms or Twitter for social-media purposes.
As well, most bloggers and small-time Website operators are likely to share or embed video resources that are hosted on YouTube or Vimeo if they want to cite a video resource. These kinds of links also expose the content on Google, Bing and other search engines, because those engines identify the number of sites linking to an online resource and “bump” it higher up the search results.
What to be aware of
An issue to be aware of in the context of fake news and disinformation is the rise of alternative platforms. These would include alternative video-hosting platforms like BitChute or PeerTube; alternative social networks like Parler or Gab; or “communications-first” platforms like Signal and Telegram with their “virtual-contact” / “broadcast-channel” functionality.
Users who follow this kind of disinformation will then see the popular social-media and video-sharing platforms in the same light as established media outlets. Here, they become condescending about such platforms, for example referring to them as “television”. This is mostly because that medium allows established broadcasters to easily maintain particular content standards.
What has been shown here is that if at least one popularly-used content platform demonstrates robust content management against disinformation, it has a ripple effect across the rest of the Social Web.
Telstra has joined a large number of private-sector actors, including its airline equivalent Qantas, in running campaigns for us to get ourselves vaccinated against the COVID-19 coronavirus plague.
Here, Telstra is exploiting its position as a mobile telephony carrier to tackle the 5G mobile-broadband myths that are often run in the same breath as anti-vax myths. This is because anti-vaccination conspiracy theorists run other themes like the harm supposedly caused by 5G mobile broadband and other non-ionising radiation sources.
Mark Humphries, who is the voice of this campaign, uses a comic line to underscore the way these conspiracy theorists approach you to spill all their nonsense. He even uses the same humour to play on these remarks so as to have you sort the truth out from the nonsense that they tell you.
Even that line “do your own research” that they quote is turned around to mean doing research from proper, knowledgeable sources who are qualified to talk about these things.
But I like the fact that he comes at the vaccination issue from the same kind of approach used by those who peddle this disinformation which can often include people within our social circle.
Here, the proper information is that these COVID vaccines like both doses of the AstraZeneca vaccine which I had are delivered purely as a liquid to be injected using garden-variety medical-use syringes and needles. They have been tested for safety and efficacy before being approved and have nothing to do with 5G (or other non-ionising) radiation, microchips or magnetism.
Don’t fall for the nonsense! Get those proper vaccination jabs and stay safe!
An issue that is raised in the context of fake news and disinformation is a campaign tactic known as “astroturfing”. This is something that our online life has facilitated thanks to easy-to-produce Websites on affordable Web-hosting deals along with the Social Web.
I am writing about this on HomeNetworking01.info because astroturfing is another form of disinformation that we need to be careful of in this online era.
What is astroturfing?
Astroturfing is organised propaganda activity intended to create a belief of popular grassroots support for a viewpoint in relation to a cause or policy. This activity is organised by one or more large organisations but typically appears as the output of concerned individuals or smaller community organisations, such as a peak body for small businesses of a kind.
But there is no transparency about who is actually behind the message or the benign-sounding organisations advancing that message. Nor is there any transparency about the money flow associated with the campaign.
organized activity that is intended to create a false impression of a widespread, spontaneously arising, grassroots movement in support of or in opposition to something (such as a political policy) but that is in reality initiated and controlled by a concealed group or organization (such as a corporation).
The etymology of this word comes about as a play on the “grassroots” expression. It alludes to the AstroTurf synthetic turf initially installed in the Astrodome stadium in Houston in the USA, with the “AstroTurf” trademark becoming a generic trademark for synthetic sportsground turf sold in North America.
This was mainly practised by Big Tobacco to oppose significant taxation and regulation measures against tobacco smoking, but continues to be practised by entities whose interests are against the public good.
How does astroturfing manifest?
It typically manifests as one or more benign-sounding community organisations that appear to demonstrate popular support for or against a particular policy. It typically affects policies for the social or environmental good where there is significant corporate or other “big-money” opposition to these policies.
The Internet era has made this more feasible thanks to the ability to create and host Websites cheaply. As online forums and social media came on board, it became feasible to set up multiple personas and organisational identities on forums and social-media platforms to make it appear as though many people or organisations are demonstrating popular support for the argument. It is also feasible to interlink Websites, online forums and Social-Web presences by posting a link from a Website or blog in a forum or Social-Web post, or by having articles from a Social-Web account appear on one's Website.
The multiple online personas created by one entity to demonstrate the appearance of popular support are described as “sockpuppet” accounts. This is a reference to children's puppet shows where two or three puppeteers use glove puppets made out of odd socks, allowing each actor to manipulate twice the number of characters. This activity can happen synchronously with a particular event that is in play, be it the effective date of an industry reform or set of restrictions; a court case or inquiry taking place; or a legislature working on an important law.
An example of this occurred during the long COVID-19 lockdown that affected Victoria last year, where the “DanLiedPeopleDied” and “DictatorDan” hashtags were manipulated on Twitter to create a sentiment of popular distrust of Dan Andrews. Here it was identified that a significant number of the Twitter accounts that drove these hashtags surfaced or changed their behaviour in sync with the lockdown's effective period.
But astroturfing can also manifest in offline / in-real-life activities like rallies and demonstrations; appearances on talkback radio; letters to newspaper editors; pamphlet drops; and traditional advertising techniques.
Let’s not forget that old-fashioned word-of-mouth advertising for an astroturfing campaign can take place here like over the neighbour’s fence, at the supermarket checkout or around the office’s water cooler.
Sometimes the online activity is used to rally for support for one or more offline activities or to increase the amount of word-of-mouth conversation on the topic. Or the pamphlets and outdoor advertising will carry references to the campaign’s online resources so people can find out more “from the horse’s mouth”. This kind of material used for offline promotion can be easily and cheaply produced using “download-to-print” resources, print and copy shops that use cost-effective digital press technology, firms who screen-print T-shirts on demand from digital originals amongst other online-facilitated technologies.
An example of this, highlighted by Spectrum News 1 San Antonio in the USA, was the protest activity against COVID-19 stay-at-home orders in that country. The article alluded to Donald Trump and others steering public opinion away from a COVID-safe USA.
This method of deceit capitalises on popular trust in the platform and the apparently-benign group behind the message or appearance of popular support for that group or its message. As well, astroturfing is used to weaken any true grassroots support for or against the opinion.
How does astroturfing affect media coverage and academic discussion of an issue?
The plausible-sounding arguments tendered by a benign-sounding organisation can encourage journalists to “go with the flow” regarding the organisation's ideas. This can include treating the organisation's arguments at face value as a supporting or opposing view on the topic at hand, especially where journalists want to create a balanced piece of material.
This risk is significantly increased in media environments where there isn’t a culture of critical thinking with obvious examples being partisan or tabloid media. Examples of this could be breakfast/morning TV talk shows on private free-to-air TV networks or talkback radio on private radio stations.
But there is a greater risk of this occurring as investment in public-service and private news media is increasingly reduced. Here, the fear of newsrooms being reduced or shut down, or of journalists not being paid much for their output, can reduce the standard of journalism and the ability to perform proper due diligence on news sources.
There is also the risk of an astroturfing campaign affecting academic reportage of the issue. This is more so where the student doesn't have good critical-thinking and research skills and can be easily swayed by spin. It is particularly so in secondary education or in some tertiary-education situations like vocational courses, or with people at an early stage of their undergraduate studies.
How does astroturfing affect healthy democracies?
All pillars of government can and do fall victim to astroturfing. This can happen at all levels of government ranging from local councils through state or regional governments to the national governments.
During an election, an astroturfing campaign can be used to steer opinion for or against a political party or candidate who is standing for election. In the case of a referendum, it can steer popular opinion towards or against the questions that are the subject of the referendum. This is done in a manner to convey the veneer of popular grassroots support for or against the candidate, party or issue.
The legislature is often a hotbed of political lobbying by interest groups, and astroturfing can be used to create a veneer of popular support for or against legislation or regulation of concern to the interest group. As well, astroturfing can be used as a tool to place pressure on legislature members to advance or stall a proposed law and, in some cases, force a government out of power where there is a stalemate over that law.
The public-service agencies of the executive government who have the power to permit or veto activity are also victims of astroturfing. This comes in the form of whether a project can go ahead or not; or whether a product is licensed for sale within the jurisdiction. It can also affect the popular trust in any measures that officials in the executive government execute.
As well, the judiciary can be tasked with handling legal actions launched by pressure groups who use astroturfing to create a sense of popular support to revise legislation or regulation. It also includes how jurors are influenced in any jury trial or which judges are empanelled in a court of law, especially a powerful appellate court or the jurisdiction’s court of last resort.
Politicians, significant officials and key members of the judiciary can fall victim to character assassination campaigns that are part of one or more astroturfing campaigns. This can affect continual popular trust in these individuals and can even affect the ability for them to live or conduct their public business in safety.
Here, politicians and other significant government officials are increasingly becoming accessible to the populace. They facilitate this by maintaining a Social-Web presence using a public-facing persona on the popular social-media platforms, with the same account name or “handle” being used across the multiple platforms. In the same context, the various offices and departments maintain their Social-Web presence on the popular platforms using office-wide accounts. This is in addition to other online presences like the ministerial Web pages or public-facing email addresses they or the government maintain.
These officials can be approached by interest groups who post to the official’s Social-Web presence. Or a reference can be created to the various officials and government entities through the use of hashtags or mentions of platform-native account names operated by these entities when someone creates a Social Web post about the official or issue at hand. In a lot of cases, there is reference to sympathetic journalists and media organisations in order to create media interest.
As well, one post with the right message and the right mix of hashtags and referenced account names can be viewed by the targeted decision makers and the populace at the same time. Then people who are sympathetic to that post’s message end up reposting that message, giving it more “heat”.
Here, the Social Web is seen as providing unregulated access to these powerful decision-makers, even though the decision-makers work with personal assistants or similar staff to vet content that they see. As well, there isn't any transparency about who is posting the content that references these officials, i.e. you don't know whether it is a local constituent or someone pressured by an interest group.
What can be done about it?
The huge question here is what can be done about astroturfing as a means of disinformation.
A significant number of jurisdictions implement attribution requirements for any advertising or similar material as part of their fair-trading, election-oversight, broadcasting, unsolicited-advertising or similar laws. Similarly, a significant number of jurisdictions implement lobbyist regulation in relation to who has access to the jurisdiction's politicians. As outlined in the RNZ article that I referred to, New Zealand is examining astroturfing in the context of whether it should regulate access to its politicians.
But most of these laws regulate what goes on within the offline space within the jurisdiction that they pertain to. It could become feasible for foreign actors to engage in astroturfing and similar campaigns from other territories across the globe using online means without any action being taken.
The issue of regulating lobbyist access to the jurisdiction’s politicians or significant officials can raise questions. Here it could be about whether the jurisdiction’s citizens have a continual right of access to their elected government or not. As well, there is the issue of assuring governmental transparency and a healthy dialogue with the citizens.
The well-oiled government and business machines have employees like secretaries and assistants who have the function of being gatekeepers for the officials that matter. Even significant and influential public figures engage these “gatekeeper” teams to protect their personality “brand” from being misused.
These “gatekeeper” teams also include staff dedicated to monitoring traditional and social-media coverage of the issues at hand. These employees, if they are astute enough, can alert the officials to disinformation campaigns or prevent these officials from being swayed by them.
The 2016 fake-news crisis which highlighted the distortion of the 2016 US Presidential Election and UK Brexit referendum became a wake-up call regarding how the online space can be managed to work against disinformation.
Here, Silicon Valley took on the task of managing online search engines, social-media platforms and online advertising networks to regulate foreign influence and assure accountability when it comes to political messaging in the online space. This included identity verification of advertiser accounts or keeping detailed historical records of ads from political advertisers on ad networks or social media or clamping down on coordinated inauthentic behaviour on social media platforms.
In addition to this, an increasingly-large army of “fact-checkers” organised by credible newsrooms, universities and similar organisations appeared. These groups researched and verified claims that were being published through the media or on online platforms and would state whether they were true or false based on their research.
What we can do is research further and trust our instincts when it comes to questionable claims that come from apparently-benign organisations. Here we can do our due diligence and check things like how long an online account has been in operation, especially whether its appearance coincides with particular political, regulatory or similar events occurring or being on the horizon.
Here you have to look out for behaviours in the online or offline content like:
Inflammatory or manipulative language that plays on your emotions
Claims to debunk topic-related myths that aren’t really myths
Questioning or pillorying those exposing the wrongdoings core to the argument rather than the actual wrongdoings
A chorus of the same material from many accounts
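The first of these due-diligence checks, account age relative to key events, can be expressed as a simple filter. Below is an illustrative sketch with hypothetical account names and dates, not a tool any platform actually provides:

```python
from datetime import date, timedelta

def flag_event_synchronous(accounts, event_date, window_days=30):
    """Return account names created within a window leading up to a key event,
    one of the suspicious behaviours described above."""
    window = timedelta(days=window_days)
    return [name for name, created in accounts.items()
            if event_date - window <= created <= event_date]

# Hypothetical account-creation dates and a hypothetical lockdown start date.
accounts = {
    "longtime_user": date(2015, 3, 1),
    "fresh_account_1": date(2020, 6, 28),
    "fresh_account_2": date(2020, 7, 2),
}
suspects = flag_event_synchronous(accounts, date(2020, 7, 9))
print(suspects)  # ['fresh_account_1', 'fresh_account_2']
```

Real investigations, such as the Twitter-account analysis mentioned earlier, work along the same lines but pull creation dates and posting behaviour from the platform's own data.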
We need to be aware of astroturfing as another form of disinformation that is prevalent in the online age. Here it can take in people who are naive and accept information at face value without doing further research on what is being pushed.
Previously, when I have talked about activities that social media companies have undertaken regarding misinformation during election cycles, including misinformation to suppress voter participation, I have covered what these companies in the private sector are doing.
But I have also wanted to see a healthy dialogue between the social-media private sector and public-sector agencies responsible for the security and integrity of the elections. This is whether they are an election-oversight authority like the FEC in the USA or the AEC in Australia; a broadcast oversight authority like the FCC in the USA or OFCOM in the UK; or a consumer-rights authority like the FTC in the USA or the ACCC in Australia. Here, these authorities need to be able to know where the proper communication of electoral information is at risk so they can take appropriate education and enforcement action regarding anything that distorts the election’s outcome.
Just lately, the US government arrested a Twitter troll who had been running information on his Twitter feed to dissuade Americans from participating properly and making their vote count in the 2016 Presidential Election. Here, the troll was suggesting that they not attend the local polling booths but cast their vote using SMS or social media, which isn't considered a proper means of casting a vote in the USA. Twitter had banned him and a number of alt-right figureheads that year for harassment.
These charges are based on a little-known US statute that proscribes activity that denies or dissuades a US citizen’s right to exercise their rights under that country’s Constitution. That includes the right to cast a legitimate vote at an election.
But this criminal case could be seen as a means to create a “conduit” between social media platforms and the public sector to use the full extent of the law to clamp down on disinformation and voter suppression using the Web. I also see it as a chance for public prosecutors to examine the laws of the land and use them as a tool to work against the fake news and disinformation scourge.
This is a criminal matter before the courts of law in the USA and the defendant is presumed innocent unless he is found guilty in a court of law.
I read in Gizmodo how an incendiary hashtag directed against Daniel Andrews, the State Premier of Victoria in Australia, was pushed around the Twittersphere, and I am raising it in this article as part of keeping HomeNetworking01.info readers aware of disinformation tactics as we increasingly rely on the Social Web for our news.
What is a hashtag?
A hashtag is a single keyword preceded by a hash ( # ) symbol that is used to identify posts within the Social Web that feature a concept. It was initially introduced in Twitter as a way of indexing posts created on that platform and make them easy to search by concept. But an increasing number of other social-Web platforms have enabled the use of hashtags for the same purpose. They are typically used to embody a slogan or idea in an easy-to-remember way across the social Web.
Most social-media platforms turn these hashtags into a hyperlink that shows a filtered view of all posts featuring that hashtag. They even use statistical calculations to identify the most popular hashtags on the platform, or the ones whose visibility is increasing, and present this in meaningful ways like ranked lists or keyword clouds.
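The indexing and ranking just described can be illustrated with a minimal sketch, using made-up sample posts rather than any platform's actual code: hashtags are pulled out of post text with a regular expression and tallied, and the highest counts would head a “trending” list.

```python
import re
from collections import Counter

HASHTAG_RE = re.compile(r"#(\w+)")

def extract_hashtags(post_text):
    """Pull the hashtags out of a post, lower-cased so variants tally together."""
    return [tag.lower() for tag in HASHTAG_RE.findall(post_text)]

# Made-up sample posts standing in for a platform's post stream.
posts = [
    "Get vaccinated! #StaySafe #COVID19",
    "Another myth debunked #COVID19",
    "Morning run done #StaySafe",
]

# Tally every hashtag across the stream; the top counts would head a
# "trending" list or size the entries of a keyword cloud.
counts = Counter(tag for post in posts for tag in extract_hashtags(post))
print(counts.most_common(2))
```

Real platforms layer time-windowing and rate-of-growth statistics on top of raw counts, which is precisely what the “gaming the algorithm” activity described later tries to exploit.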
How this came about
Earlier on in the COVID-19 coronavirus pandemic, an earlier hashtag called #ChinaLiedPeopleDied was working the Social Web. This underscored a concept with only a modicum of truth: that the Chinese government didn't come clean about the genesis of the COVID-19 plague, with its worldwide death toll, and its role in informing the world about it.
That hashtag was used to fuel Sinophobic hatred against the Chinese community and was one of the first symptoms of questionable information floating around the Social Web regarding COVID-19 issues.
As Australia passed through the early months of the COVID-19 plague, one of its border-control measures for this disease was to require incoming travellers to stay in particular hotels for a fortnight, as a quarantine measure, before they could roam around Australia. The Australian federal government put this program in the hands of the state governments but offered resources like the use of the military to these governments as part of its implementation.
The second wave of the COVID-19 virus was happening within Victoria, and a significant number of the cases were to do with some of the hotels associated with the hotel quarantine program. This caused a very significant death toll and led the state government to impose a raft of very stringent lockdown measures.
A new hashtag called #DanLiedPeopleDied came about because the Premier, Daniel Andrews, as the head of the state's executive government, was perceived not to have come clean about any and all bungles associated with its management of the hotel quarantine program.
On 14 July 2020, this hashtag first appeared in a Twitter account that initially touched on Egyptian politics and delivered its posts in the Arabic language. But it suddenly switched countries, languages and political topics, which is one of the symptoms of a Social Web account existing just to peddle disinformation and propaganda.
The hashtag lay low until 12 August, when a run of Twitter posts featuring it was delivered by hyper-partisan Twitter accounts. This effort, also underscored by newly-created or suspicious accounts that existed to bolster the messaging, was to make it register on Twitter's systems as a “trending” hashtag.
Subsequently a far-right social-media influencer with a following of 116,000 Twitter accounts ran a post to keep the hashtag going. There was a lot of very low-quality traffic featuring that hashtag or its messaging. It also included a lot of low-effort memes being published to drive the hashtag.
The above-mentioned Gizmodo article has graphs to show how the hashtag appeared over time which is worth having a look at.
What were the main drivers?
But a lot of the traffic highlighted in the article was driven by the use of new or inauthentic accounts which aren't necessarily “bots” – machine-operated accounts that provide programmatic responses or posts. Rather, this is the handiwork of trolls or sockpuppets (multiple online personas that are perceived to be different but say the same thing).
As well, there was a significant amount of “gaming the algorithm” activity going on in order to raise the profile of that hashtag. This is due to most social-media services implementing algorithms to expose trending activity and populate the user’s main view.
Why this is happening
Like other fake-news, disinformation and propaganda campaigns, the #DanLiedPeopleDied hashtag is an effort to sow seeds of fear, uncertainty and doubt while bringing about discord with information that has very little in the way of truth. As well, the main goal is to cause popular distrust in leadership figures and entities, as well as their advice and efforts.
In this case, the campaign was targeted at us Victorians, who were facing social and economic instability associated with the recent stay-at-home orders thanks to COVID-19's intense reappearance, in order to have us distrust Premier Dan Andrews and the State Government even more. These kinds of campaigns are run at people who are in a state of vulnerability, when they are less likely to use defences like critical thought to protect themselves against questionable information.
Australia is rated as one of the most sustainable countries in the world by the Fragile States Index, in the same league as the Nordic countries, Switzerland, Canada and New Zealand. This means the country is known to be socially, politically and economically stable. But a targeted information-weaponisation campaign can be used to destabilise even such a country, and we need to be sensitive to such tactics.
One of the key factors behind the problem of information weaponisation is the weakening of traditional media’s role in the dissemination of hard news. This includes younger people preferring to go to online resources, especially the Social Web, portals or news aggregator Websites for their daily news intake. It also includes many established newsrooms receiving reduced funding thanks to reduced advertising, subscription or government income, reducing their ability to pay staff to turn out good-quality news.
When we make use of social media, we need to develop a healthy suspicion regarding what is appearing. Beware of accounts that suddenly appear or develop chameleon behaviours especially when key political events occur around the world. Also be careful about accounts that “spam” their output with a controversial hashtag or adopt a “stuck record” mentality over a topic.
Any time where a jurisdiction is in a state of turmoil is where the Web, especially the Social Web, can be a tool of information warfare. When you use it, you need to be on your guard about what you share or which posts you interact with.
Here, do research on hashtags that are suddenly trending around a social-media platform and play on your emotions and be especially careful of new or inauthentic accounts that run these hashtags.
WhatsApp has improved on its previous efforts regarding this kind of traffic, initially by using a “double-arrow” icon on the left of messages that have been forwarded five or more times.
But now WhatsApp is trialling an option to allow users to Google the contents of a forwarded message to check its veracity. One of the ways to check a news item's veracity is whether one or more news publishers or broadcasters that you trust are covering the story and what kind of light they are shining on it.
Here, the function manifests as a magnifying-glass icon that conditionally appears near forwarded messages. If you click or tap on this icon, you start a browser session that shows the results of a pre-constructed Google-search Weblink created by WhatsApp. It avoids the need to copy then paste the contents of a forwarded message from WhatsApp to your favourite browser running your favourite search engine or to the Google app’s search box. This is something that can be very difficult with mobile devices.
But does this function break the end-to-end encryption that WhatsApp implements for conversations? No, because it works on the cleartext that you see on your screen, simply creating the specially-crafted Google-search Weblink that is passed to whatever software handles Weblinks by default.
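Mechanically, such a pre-constructed search Weblink is just the message's visible text URL-encoded into a search engine's publicly-known query URL. A minimal sketch of the idea, which is not WhatsApp's actual code:

```python
from urllib.parse import quote_plus

def build_search_url(message_text, engine="google"):
    """URL-encode a message's visible text into a search engine's query URL."""
    # Publicly-known query-URL patterns; this is not WhatsApp's internal code.
    bases = {
        "google": "https://www.google.com/search?q=",
        "bing": "https://www.bing.com/search?q=",
        "duckduckgo": "https://duckduckgo.com/?q=",
    }
    return bases[engine] + quote_plus(message_text)

url = build_search_url("5G towers spread the virus")
print(url)  # https://www.google.com/search?q=5G+towers+spread+the+virus
```

As the sketch suggests, supporting other search engines is largely a matter of knowing each engine's query-URL pattern, which is why such a feature could plausibly be extended beyond Google.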
An initial pilot run is being made available in Italy, Brazil, Ireland (Eire), UK, Mexico, Spain and the USA. It will be part of the iOS and Android native clients and the messaging service’s Web client.
WhatsApp could evolve this function further by allowing the user to use different search engines like Bing or DuckDuckGo. But they would have to know of any platform-specific syntax requirements for each of these platforms and it may be a feature that would have to be rolled out in a piecemeal fashion.
WhatsApp could also offer the “search the Web” function for any message, rather than only for forwarded messages. I see it as being relevant for people who use the group-chatting functionality that WhatsApp offers, because people can use a group chat as a place to post that rant with a link to a questionable Web resource. Or you may have a relative or friend who simply posts questionable information as part of their conversation with you.
At least WhatsApp is adding features to its chat platform’s client software to make it easier to put the brakes on disinformation spreading through it. This is something that could be investigated by other instant-messaging platforms, including SMS/MMS text clients.
The Trusted News Initiative is a recently formed group of global news and tech organisations, mostly household names in these fields, who are working together to stop the spread of disinformation where it poses a risk of real-world harm. It also includes flagging misinformation that undermines trust in the TNI’s partner news providers like the BBC. Here, the online platforms can review the content that comes in, perhaps red-flagging questionable content, and newsrooms avoid blindly republishing it.
.. as well as their online presence – they will benefit from having their imagery authenticated by a TNI watermark
One of their efforts is to agree on and establish an early-warning system to combat the spread of fake news and disinformation. It was established in the months leading up to polling day for the 2020 US Presidential Election and flags disinformation where there is an immediate threat to life or election integrity.
It is based on efforts to tackle disinformation associated with the 2019 UK general election, the Taiwan 2020 general election, and the COVID-19 coronavirus plague.
Another tactic is Project Origin, which this article is primarily about.
An issue often associated with fake news and disinformation is the use of imagery and graphics to make the news look credible and from a trusted source.
Typically this involves altered or synthesised images and vision overlaid with the logos and other trade dress associated with the BBC, CNN or another respected newsroom. This conveys to people who view the material online or on TV that the news is real and comes from a respected source.
Project Origin is about creating a watermark for imagery and vision that comes from a particular authentic content creator. This will degrade whenever the content is manipulated. It will be based around open standards overseen by TNI that relate to authenticating visual content thus avoiding the need to reinvent the wheel when it comes to developing any software for this to work.
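The core idea, cryptographically, is tamper evidence: an authenticity tag computed over the content at the point of creation that no longer verifies once the bytes change. The sketch below is purely illustrative and is not Project Origin’s actual scheme, which is built on open provenance standards and public-key signatures rather than the shared-key tag shown here; the key and function names are my own:

```python
import hashlib
import hmac

# Hypothetical shared key for demonstration only.  A real provenance
# scheme would use a public-key signature so anyone can verify without
# being able to forge the tag.
PUBLISHER_KEY = b"demo-newsroom-key"

def sign_content(image_bytes):
    """Compute a tamper-evident tag over the raw content bytes."""
    return hmac.new(PUBLISHER_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_content(image_bytes, tag):
    """Any manipulation of the bytes invalidates the tag."""
    return hmac.compare_digest(sign_content(image_bytes), tag)

original = b"\x89PNG...raw image data..."
tag = sign_content(original)
print(verify_content(original, tag))             # unmodified content verifies
print(verify_content(original + b"edit", tag))   # manipulated content fails
```

This is the sense in which a watermark can “degrade” under manipulation: the verification step, not the pixels, is what breaks.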
One question I would have is whether the watermark is only readable by computer equipment, or whether there is a human-visible element like the so-called logo “bug” that appears in the corner of video content you see on TV. If it is machine-readable only, will a news publisher or broadcaster be able to overlay a graphic or message that states the authenticity at the point of publication? Similarly, would a Web browser or a native client for an online service have extra logic to indicate the authenticity of an image or video footage?
I would also like to see the ability to indicate the date of the actual image or footage being part of the watermark. This is because some fake news tends to be corroborated with older lookalike imagery like crowd footage from a similar but prior event to convince the viewer. Some of us may also look at the idea of embedding the actual or approximate location of the image or footage in the watermark.
There is also the issue of newsrooms importing images and footage from other sources whose equipment they don’t have control over. For example, an increasing amount of amateur and videosurveillance imagery is used in the news usually because the amateur photographer or the videosurveillance setup has the “first images” of the news event. Then there is reliance on stock-image libraries and image archives for extra or historical footage; along with newsrooms and news / PR agencies sharing imagery with each other. Let’s not forget media companies who engage “stringers” (freelance photographers and videographers) who supply images and vision taken with their own equipment.
The question with all this, especially with amateur / videosurveillance / stringer footage taken with equipment that media organisations don’t have control over is how such imagery can be authenticated by a newsroom. This is more so where the image just came off a source like someone’s smartphone or the DVR equipment within a premises’ security room. There is also the factor that one source could tender the same imagery to multiple media outlets, whether through a media-relations team or simply offering it around.
At least Project Origin will be useful as a method to allow the audience to know the authenticity and provenance of imagery that is purported to corroborate a newsworthy event.
Regularly, I cover on HomeNetworking01.info the issue of fake news and disinformation. This is because consuming news and information has become part of our online lives thanks to the ubiquity and affordability of the Internet.
I have highlighted the use of online sources like social media, Web portals, search engines or news aggregators as our regular news sources along with the fact that it is very easy to spread rumour and disinformation around the world thanks to the ease of publishing the Web provides. As well, it is easy for our contacts to spread links to Web resources or iterate messages in these resources via the Social Web, emails or instant-messaging platforms.
This issue has been of concern since 2016, when fake news and disinformation circulating around the Web were used to distort the outcome of the UK’s Brexit referendum and the US election that brought Donald Trump into the presidency of that country.
Since then, I have covered efforts by the tech industry and others to make us aware of fake news, disinformation and propaganda, such as through the use of fact-checkers or online services implementing robust data-security and account-management policies and procedures. It also includes articles that encourage the use of good-quality traditional media sources during critical times like national elections or the coronavirus pandemic, and I even see being armed against fake news and disinformation as part of data security.
The ABC have run a video series as part of their “Behind The News” schools-focused TV show about the media which underscores the value of media literacy and discerning the calibre of news that is being presented. On Tuesday 6 July 2020, I watched “The Drum” and one of the people behind this series described it as being highly relevant viewing for everyone no matter how old we are thanks to the issue of fake news and disinformation being spread around the Web.
It is part of their continued media-literacy efforts like their “Media Watch” TV series run on Monday nights which highlights and keeps us aware of media trends and issues.
In that same show, they even recommended that if we do post something that critiques a piece of fake news or disinformation, we were to reference the material with a screenshot rather than sharing a link to the content. This is because interactions, link shares and the like are often used as a way to “game” social-network and search-engine algorithms, making it easier to discover the questionable material.
The first video looked at how and why fake news has been spread over the ages, such as to drive newspaper sales or the listenership and viewership of broadcasts. It also touched on how such news spreads, including by taking advantage of the “thought and social bubbles” that we establish. One of the key issues highlighted was the fact that fake news tends to be out of touch with reality, which should encourage us to research further about an article and who is behind it before taking it as gospel and sharing it further.
The second video of the series touches on the quality of the news and information sources that can drive a news story. It examines the difference between primary sources, which provide first-hand information “from the horse’s mouth”, and secondary sources, which evaluate or interpret the information.
It also touches on whether the news source is relying primarily on secondary sources or hearsay versus expert or authoritative testimony. They raise the legitimacy of contrasting opinions, as in academic debate or where there isn’t enough established knowledge on the topic. But we were warned about news sources that are opinion-dominant rather than fact-dominant. Even issues like false equivalence, bias or the use of anonymous sources were identified, along with the reason behind the source presenting the information to the journalist or newsroom.
This video even summed up how we assess news sources by emphasising the CRAP rule – Currency (how recent the news is), Reliability (primary vs secondary source as well as reliability of the source), Authority (is the source authoritative on the topic) and Purpose (why was the news shared such as biases or fact vs opinion).
The third video in the series talks about what makes information newsworthy. This depends on who is reporting it and the consumers who will benefit from the information. It also covered news value like timeliness, frequency of occurrence, cultural proximity, the existence of people in the public spotlight along with factors like conflict or the tone of the story. It completely underscored why and how you are told of information that could shape your view of the world.
The fourth video looks at bias within the media and why it is there. The reasons that were called out include to influence the way we think or vote or what goods or services we buy. It also includes keeping the media platform’s sponsors or commercial partners in a positive light, a media platform building an increasingly-large army of loyal consumers, or simply to pander to their audience’s extant biases.
It also looked at personal biases that affect what we consume and how we consume it, including the “I-am-right” confirmation bias, also known as the “rose-tinted glasses” concept. Looking at how people or ideas are represented on a media platform, what kind of stories appear in that platform’s output including what gets top place, and how the stories are told with both pictures and words can highlight potential biases. There was also the fact that personal bias can influence what we think of a media outlet.
The last of the videos looks at honesty, accuracy and ethics within the realm of media. It underscores key values like honest, accurate, fair, independent and respectful reporting along with various checks and balances that the media is subject to. Examples of these include the protections that the law of the land offers like the tort of defamation, contempt of court and similar crimes that protect the proper role of the courts of justice, hatred-of-minority-group offences and right-to-privacy offences. There is also the oversight offered by entities like broadcast-standards authorities and press councils who have effective clout.
The legal references that were highlighted were primarily based on what happens within Australia, while British viewers may see something very similar there due to an implied freedom of speech along with similarly-rigorous defamation laws. The USA may take slightly different approaches, especially where they rely on the First Amendment of their Constitution, which grants an express freedom of speech.
But it sums up the role of the media as a check on the powerful and its power to shine a light on what needs to be focused on. The series also looks at how we can’t take the media for granted and need to be aware of the way the news appears on whatever media platform we use. Although the primary focus is on traditional media, in some ways the series can also encourage us to critically assess online media resources.
The video series underscores what the news media is about and covers this issue in a platform-agnostic manner so we don’t consider a particular media platform or type as a purveyor of fake or questionable news. As well, the series presents the concepts in a simple-to-understand manner but with the use of dramatisation in order to grab and keep our attention.
Here, I often wonder whether other public-service or community broadcasters are running a similar media-literacy video program that can be pitched at all age levels in order to encourage the communities they reach to be astute about the media they come across.
Increasingly, images and video are being seen as integral to news coverage, with most of us regarding them, especially photographs, as important when corroborating a fact or news story.
But these are becoming weaponised to tell a different truth compared to what is actually captured by the camera. One way is to use the same or a similar image to corroborate a different fact, with this including the use of image-editing tools to doctor the image so it tells a different story.
I have covered this previously when talking about the use of reverse-image-search tools like Tineye or Google Image Search to verify the authenticity of an image. It will be the same kind of feature that Google has enabled in its search interface when you “google” for something, or in its news-aggregation platforms.
Google is taking this further for people who search for images using their search tools. Here, they are adding images to their fact-check processes so it is easy to see whether an image has been used to corroborate questionable information. You will see a “fact-check” indicator near the image thumbnail and when you click or tap on the image for a larger view or more details, you will see some details about whether the image is true or not.
A similar feature appears on the YouTube platform for exhibiting details about the veracity of video content posted there. But this feature currently is available to users based in Brazil, India and the USA and I am not sure whether it will be available across all YouTube user interfaces, especially native clients for mobile and set-top platforms.
It is in addition to Alphabet, their parent company, offering a free tool to check whether an image has been doctored. This is because meddling with an image to constitute something else using something like Adobe Photoshop or GIMP is being seen as a way to convey a message that isn’t true. The tool, called Assembler, uses artificial intelligence and algorithms that detect particular forms of image manipulation to indicate the veracity of an image.
But I would also see the rise of tools that analyse audio and video material to identify deepfake activity, or video sites, podcast directories and the like using a range of tools to identify the authenticity of content made available through them. This may include “fact-check” labels with facts being verified by multiple newsrooms and universities; or the content checked for out-of-the-ordinary editing techniques. It can also include these sites and directories implementing a feedback loop so that users can have questionable content verified.
As various jurisdictions around the world are “peeling back” the various stay-at-home restrictions once they are sure they have the coronavirus plague under control in their territory, we could easily see our love for many-to-many videoconferencing wane. It can be more so when the barriers are fully down and we are confident about going out and about, or travelling long-distance.
But these many-to-many video-conferencing platforms like Zoom, Skype and Facebook Messenger Rooms do not need to be ignored once we can go out. It is more about keeping these platforms in continual relevance beyond the workplace and as part of personal and community life.
How can you keep these platforms relevant?
Are these multi-party video conferences going to die out when the all-clear to meet face-to-face and to travel is given?
Family and friends
Do you have members of your family or community who are separated by distance? Here, each family cluster who can meet up at a particular venue in their local area can implement Zoom, Skype or a similar platform to create a wide-area meetup amongst the clusters. It can also extend to remote members of that family or community using these platforms to “call in” and join the occasion.
This situation will be very real as we take baby steps towards getting back to what we used to do, including long-distance travel. Initially, long-distance travel will be deferred due to fears of new coronavirus infections on crowded transport modes like economy-class airline cabins, along with countries putting off opening their borders and enabling long-distance domestic travel until they are sure that the Covid-19 beast is under control.
If one of us moves to a place a long distance away, like overseas or interstate, these videoconferencing platforms become even more relevant as a tool to “keep in touch with home”. For example, once that person has settled into their home, they could use a smartphone, tablet or highly-portable laptop computer to take those of us who are “at home” on a tour of their new premises.
Similarly, an event like an engagement or “wetting the new baby’s head”, typically celebrated by small groups of relatives or friends who get together to toast the lucky couple or parents, can be taken further. Here, these small clusters could effectively “join up” into a larger virtual cluster involving the people whom the occasion is about, in order to celebrate together.
For education, distance learning can continue to be made relevant especially for people who can’t attend the class in person. This includes underserved rural and remote communities, people who are in hospital and similar places or itinerant students. There can also be a blended-learning approach that can be taken where a class can both be face-to-face and remote.
Teachers can use videoconferencing to teach classes at the school even if they are at home due to illness, caring for relatives or similar situations. This is important for teachers who place value in curriculum continuity for their students no matter what. Foreign-language teachers engaging in personal travel to the country associated with the language they teach can use aspects of the trip for curriculum enrichment. Here, they could “call in” to their classes at home from that country and engage with the country’s locals or demonstrate its local culture and idiosyncrasies.
A school’s student-exchange program can also benefit from videoconferencing by having remote exchange students able to “call in” to their home school. With this the students could share their experiences and knowledge about the remote location with their “home” class.
To the same extent, a school could link up with one or more guest speakers so those speakers can enrich the class with extra knowledge and experiences. It can even help schools that can’t afford frequent field trips, especially long-distance trips, to benefit from knowledge beyond the classroom.
In the worship context, videoconferencing technology can be about allowing mission workers to call their home church and present their report to their home congregation by video link. It can even appeal towards multiple-campus churches who want to be part of these video links.
This technology is still relevant to those small Bible-study / prayer / fellowship groups that are effectively smaller communities within a church’s community. Here, these groups could maintain videoconferencing as a way to allow members temporarily separated from the group to effectively “call in” and participate during meetups. In some cases where one of these groups becomes so large that it has to “break up” into smaller groups, they could use many-to-many videoconferencing technologies to host larger group meetups on an occasional basis.
Of course there are the key occasions that are part of a religious community’s life, like weddings or funerals. Here, it could be feasible to provide a video link-up so that people who can’t attend the services associated with these events in person, due to ill-health or distance, can view them online.
As well, the supporting parties associated with these events can become global shared celebrations comprising multiple local celebration clusters using video-link technology. This is more so with families and communities who are split up by distance but want to celebrate together.
Other community organisations who thrive on being close-knit could easily see the multi-party video-conference as being relevant especially for members who are far-flung from where they usually meet. As well, those who have presence in multiple geographic areas can exploit the likes of Zoom or Skype to make ad-hoc virtual meetings that don’t cost much to organise.
What can be done
Increased support for group videocalling on the big screens
If a mobile-platform vendor has an investment in their mobile platform along with a set-top-box platform (that’s you, Apple with your iOS and Apple TV, and Google with your Android and Chromecast), they could work towards enabling their set-top platform towards group videophone functionality.
Here, the idea would require the smartphone or tablet, which has the contact list and the user interface for videocalling platforms like FaceTime, Zoom, Skype or Facebook Messenger, to manage the calls, while a camera attached to the top of the TV is linked to the set-top box, which works with the videocalling platform as a screen, camera, speakers and microphone.
I wrote in a previous article about this idea and the two ways it can be done. One of these is to have a lean software interface in both devices that link the smartphone to the set-top box and have the caller’s face and voice on the TV with the camera linked to the set-top box bringing your face and voice to the caller. The smartphone would then run the videocalling platform, allowing the user to control the call from that device.
The other is to have the videocalling platform software on the set-top box with the ability to use the smartphone to manage accounts, callers and the like from its surface. This is similar to how DIAL is being used by Netflix and YouTube to permit users to “throw” content from smartphones or computers to smart TVs and set-top devices equipped with client software for these platforms.
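The first step of that DIAL-style approach is discovery: the phone finds DIAL-capable devices on the home network by multicasting an SSDP search for the DIAL service type. A minimal sketch of that discovery step might look like the following; the search-target string comes from the DIAL specification, while the function name and timeout are illustrative:

```python
import socket

# SSDP well-known multicast address and port, per UPnP/DIAL discovery.
SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

# M-SEARCH request asking specifically for DIAL-capable devices.
MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    f"HOST: {SSDP_ADDR}:{SSDP_PORT}\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: urn:dial-multiscreen-org:service:dial:1\r\n"
    "\r\n"
)

def discover_dial_devices(timeout=2.0):
    """Multicast the M-SEARCH and return LOCATION headers of any replies.

    Each LOCATION points at a device-description document from which
    the DIAL REST endpoint can be derived.  Sends real UDP traffic,
    so it only returns results on a network with DIAL devices present.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(MSEARCH.encode(), (SSDP_ADDR, SSDP_PORT))
    locations = []
    try:
        while True:
            data, _addr = sock.recvfrom(1024)
            for line in data.decode(errors="ignore").splitlines():
                if line.lower().startswith("location:"):
                    locations.append(line.split(":", 1)[1].strip())
    except socket.timeout:
        pass
    finally:
        sock.close()
    return locations
```

Once a device is found, the controlling app asks the device to launch the matching client app (the “throw” step), which is how Netflix and YouTube already move content onto the big screen.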
The videoconferencing platforms of the Zoom, Skype and Facebook Messenger Rooms ilk can be of use beyond the pandemic shutdown, serving as a way to bridge distance and bring communities together.