Category: Social Web

European Union deems Big Tech companies and services as gatekeepers

Article

European Union flag - Creative Commons by Rock Cohen - https://www.flickr.com/photos/robdeman/

The EU will be using two new tools to regulate Big Tech significantly

EU names six tech giant ‘gatekeepers’ under DMA guidelines | Mashable

From the horse’s mouth

European Union

Digital Markets Act: Commission designates six gatekeepers (europa.eu)

My Comments

The European Union is taking serious steps towards controlling Big Tech further and enforcing a competitive market within its territory.

They recently passed the Digital Markets Act and Digital Services Act, laws which apply to companies that have a significant market presence in the EU. The former is about assuring real competition by doing things like prying open app stores to competition, requiring a service to accept advertising for its competitors, or assuring that end-users have access to the data they generate through these services. The latter regulates online services to assure a user experience that is safe and in harmony with European values, while supporting innovation and competitiveness.

Initially, six powerful Big Tech companies have been designated as “gatekeepers” under the Digital Markets Act. These are Alphabet (Google, Jigsaw, Nest), Amazon, Meta (Facebook, Facebook Messenger, Instagram, Threads, WhatsApp), Apple, ByteDance (TikTok) and Microsoft.

Google Play Android app store

The European laws will also be about prying open the app-store marketplace for mobile platform devices

Most of the products like Facebook, Instagram, TikTok, YouTube, Amazon’s marketplaces, the familiar Google search engine, and the mobile app stores run by Apple and Google are listed services or platforms subject to scrutiny as “gateways”. The iOS, Android and Microsoft Windows desktop operating systems are also deemed “gateways” under this law. But I am surprised that the Apple macOS operating system wasn’t deemed a “gateway” under that law.

There is further investigation into Microsoft’s Bing search platform, Edge browser and advertising platform, and Apple’s iMessage messaging service regarding deeming them as “gateways”.

The latter has attracted intense scrutiny from the computing press due to it not being fully interoperable with Android users who use first-party messaging clients compliant with the standards-based RCS advanced-messaging platform put forward by the GSM Association. This causes a significantly reduced messaging experience when iPhone users want to message Android users, such as not being able to share higher-resolution images.

What happens is that “gatekeeper” IT companies will be under strict compliance measures, with a requirement to report to the European Commission. These include requirements to:

  • accept competitors on their platform, which will apply to app stores, operating systems and online advertising platforms
  • ensure that end-users have access to data they generate on the platform
  • allow end-users and merchants to complete transactions away from app-store and similar platforms owned by the gatekeeper company
  • assure independent verification by advertisers of ad impressions that occur on their ad-tech platform

At the moment, an online service or similar IT company is considered a “gatekeeper” if they have:

  • EUR€7.5 billion turnover
  • EUR€75 billion market capitalisation
  • 45 million or more active users in the 27 European-Union member countries
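The quantitative side of the thresholds listed above can be sketched as a simple check. Note this is only an illustration: the DMA’s actual designation process also involves qualitative assessment, the turnover and market-capitalisation tests are alternatives rather than both being required, and the function name here is my own.

```python
def meets_gatekeeper_thresholds(turnover_eur_bn, market_cap_eur_bn, eu_active_users_m):
    """Rough sketch of the DMA's quantitative gatekeeper thresholds:
    EUR 7.5bn turnover OR EUR 75bn market capitalisation, plus
    45 million or more active users in the EU member countries."""
    big_enough = turnover_eur_bn >= 7.5 or market_cap_eur_bn >= 75.0
    wide_enough = eu_active_users_m >= 45.0
    return big_enough and wide_enough

print(meets_gatekeeper_thresholds(30.0, 900.0, 250.0))  # True: a Big Tech giant
print(meets_gatekeeper_thresholds(2.0, 10.0, 5.0))      # False: a mid-sized service
```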

Personally, I would like to see the geographic realm for active users based on a larger area in Europe, because non-EU countries like Switzerland, Norway, Iceland and the UK, along with EU-candidate countries, also contribute to the user base. For example, this could be based on the European Economic Area or on membership of the Council of Europe, which sets fundamental human-rights expectations in Europe.

Failure to comply can see the company face fines of up to 10% of its global turnover, and even gives the European Union the ability to subject a company to a Standard Oil / AT&T style forced breakup.

At the moment, it is about the EU setting an example on reining in Big Tech, with the DMA being considered a gold standard by the consumer IT press just as the GDPR was considered a gold standard for user privacy. But the United Kingdom is putting a similar regime in place by introducing the Digital Markets, Competition and Consumers Bill before Parliament. This is while the USA is trying to pry open app stores with various anti-trust (competitive-trade) and similar legislation.

A question that will also arise is whether European Union bureaucrats can effectively exercise control over corporations anywhere in the world, such as forcing the breakup of a dominant corporation that is chartered in the USA for example. They could, however, exert this power over a company’s local affiliate offices that exist within Europe.

There is still a very serious risk of Big Tech “dumping” non-compliant software and services into jurisdictions that aren’t covered by these regulations. This will typically manifest in software or services with the features desired by customers, like sideloading or competitive app-store access for mobile operating systems, or ad-free subscription versions of social networks, being only available in Europe. This happened with Microsoft when the EU forced them to let end-users install an alternative Web browser when commissioning a new Windows computer, with this feature only appearing within Europe.

A previous analogy I used is what has been happening with the vehicle market in Australia where vehicles that aren’t fuel-efficient to current international expectations appear in this country whereas other countries benefit from those vehicles that are fuel-efficient. This is due to Australia not implementing the fleet-wide fuel-efficiency standards being used in many countries around the world.

Who knows how long it will take to push similar legislation or regulation aimed at curbing Big Tech’s marketplace powers around the world. Only time will tell.

When use of multiple public accounts isn’t appropriate

Article

Facebook login page

There are times when use of multiple public accounts isn’t appropriate

The murky world of politicians’ covert social media accounts (sbs.com.au)

My Comments

Just lately there have been questions raised about how Australian politicians and their staff members were operating multiple online personas to disparage opponents, push political ideologies or “blow their own trumpet”.

It is being raised in connection with legislative reforms the Australian Federal Government is working on to place the onus of responsibility for online defamation on whoever posts the defamatory material in a comments trail on an online service. This is different to the status quo of having whoever sets up or manages an online presence, like a Website or Facebook Page, liable for defamation.

Here, it is in the context of what is to be expected for proper political communication including any “government-to-citizen” messaging. This is to make sure we can maintain trust in our government and that all political messaging is accurate and authentic in the day and age of fake news and disinformation.

I see this also being extended to business communication, including media/marketing/PR and non-profit advocacy organisations who have a high public profile. Here, it is to assure that any messaging by these entities is authentic so that people can build trust in them.

An example of a public-facing online persona – the Facebook page of Dan Andrews, the current Premier of Victoria

What I refer to as an “online persona” are email, instant-messaging and other communications-service accounts; Web pages and blogs; and presences on various part of the Social Web that are maintained by a person or organisation. It is feasible for a person or organisation to maintain a multiplicity of online personas like multiple email accounts or social-media pages that are used to keep public and private messaging separate, whether that’s at the business or personal level.

The normal practice for public figures at least is to create a public online persona and one or two private online personas such as an intra-office persona for colleagues and a personal one for family and friends. This is a safety measure to keep public-facing communications separate from business and personal communications.

Organisations may simply create particular online personas for certain offices with these being managed by particular staff members. In this case, they do this so that communications with a particular office stay the same even as office-holders change. As well, there is the idea of keeping “business-private” material separate from public-facing material.

In this case, the online personas reference the same entity by name at least. This is to assure some form of transparency about who is operating that persona. Other issues that come into play here include which computing devices are being used to drive particular online personas.

This is more so for workplaces and businesses that own computing and communications hardware and have staff communicate on those company-owned devices for official business, while staff members use devices they bought themselves to operate non-official online personas. That said, more entities are moving towards “BYOD” practices where staff members use their own devices for official work, with systems in place to keep confidential work secure on staffer-owned devices.

But there is concern about some Australian politicians creating multiple public-facing personas in order to push various ideologies. Here, these personas are operated in an opaque manner in order to create the impression of multiple discrete persons. This technique, when used to make it appear as though many vouch for a belief or ideology, is referred to under terms like sockpuppetry or astroturfing.

This issue is being raised in the context of government-citizen communication in the online era. But it can also be related to individuals, businesses, trade unions or other organisations who are using opaque means to convey a sense of “popular support” for the same or similar messages.

What I see as appropriate when establishing multiple online personas is some form of transparency about which person or organisation is managing the different online personas. That includes where there are multiple “child” online personas, like Websites, operated by a “parent” online persona like an organisation. This practice comes into being where online personas like email addresses and microsites (small Websites with specific domain names) are created for a particular campaign but aren’t torn down after that campaign.

As well, it includes which online personas are used for which kinds of communications. This includes what is written on that “blue-ticked” social-media page, or the online addresses printed on business cards or literature you hand out to the public.

Such public-communications mandates will also be required under election-oversight or fair-trading legislation so people know who is behind the messaging; this is especially important if the messaging is issues-based rather than candidate-based. If an individual is pushing a particular message under their own name, they will have to state whether an entity is paying or encouraging them to advance the message.

This is due to most of us becoming conscious of online messaging from questionable sources, thanks to the popular concern about fake news and disinformation and its impact on elections since 2016, namely the Brexit referendum and Donald Trump’s presidential victory in the USA. It is also due to the rise of online influencer culture, where brands use big-time and small-time celebrities and influencers to push their products, services and messages online.

SwabDogsOfInstagram – an Instagram account to provide light relief in these times

Articles

The Swab Dogs Instagram Account Is the Too-Pure Antidote to Covid – And What Lockdown Melbourne Needs Right Now (broadsheet.com.au)

CUTE! Swab Dogs on Instagram – MamaMag

Meet the COVID-19 ‘swab dogs’ of Melbourne making everyone smile – ABC News

coronavirus victoria: ‘Swab dogs’ brighten testing days (9news.com.au)

Instagram account

Chipper (@swabdogsofinsta) • Instagram photos and videos

 


My Comments

As part of the ongoing effort to tackle the COVID-19 coronavirus plague, many a jurisdiction is setting up COVID-19 testing facilities around their cities and towns. These are to identify who has and hasn’t caught this virus so proper measures can be taken.

In most situations the tests being done are PCR-based tests that require a mucus sample to be taken from the patient’s nose and throat for analysis, with the results coming through within at most 24 hours. An increasing number of these facilities are of the “drive-through” kind, where patients drive up and nurses take the swabs while the patients sit in their cars.

But staff at one such “pop-up” facility in Melbourne noticed that most of the patients coming through that site had a dog with them in their car. Typically patients did this because the pet was a security blanket or simply to take the dog out in the car with them. As well, the staff, who were working long days in full PPE, found that the presence of these furry companions in the patients’ cars lifted their spirits.

One of the staff noticed this and asked patients whether they could photograph their canine companions and, when the patients agreed, took the photos. These ended up on a special Instagram account which was effectively a “photo reel” of these dog and other animal pictures.

This started to become popular, with more staff at other drive-through COVID testing clinics contributing images to the account’s photo roll, especially as pop-up testing clinics relocated around the city. Even drive-through testing sites elsewhere in Australia and the rest of the world contributed pictures of their patients’ furry companions to this reel. Of course, being started in Australia, the “SwabDogsOfInsta” account features a few Kelpies and Australian Cattle Dogs (Blue Heelers) as part of the montage.

The “SwabDogsOfInsta” account ended up as a popular Instagram account to follow around the world, with significant media coverage about it. People would leave lovely comments about these dogs, with some mentioning that the animal in the photo was similar to one they had at some point in their life. As well, these pictures come into their own to cheer everyone up during these difficult times.

Even with the newer COVID Omicron variant doing the rounds and these drive-through testing clinics being bogged down with many people seeking PCR-based swab tests, this reel is being built out with more of the dogs accompanying the many patients at the facilities.

If Instagram could work with screen-saver / automatic-wallpaper-changer software or electronic picture frames, with the ability to show a particular account’s images on that kind of software or hardware, this could be the kind of account that would work well with such a setup.

But simply, it’s a good account to add to your Instagram “follow” collection if you want something that takes your mind off difficult times.

Alternative “free-speech” social networks are coming to the fore

Article

Parler login page screenshot

‘Free speech’ social networks claim post-election surge | Engadget

My Comments

A new breed of social networks is becoming popular with user groups who see Facebook, Twitter, YouTube and co as being equivalent to mainstream media, especially the popular TV channels.

These networks are based in North America, yet outside Silicon Valley. This means that they don’t subscribe to the perceived groupthink associated with Silicon Valley / Northern Californian culture.

As well, they came to the fore in response to Facebook, Twitter and Google tackling fake news and disinformation by implementing fact-checking mechanisms and flagging questionable material. Let’s not forget that there is social and business pressure on the established social media companies to clamp down on racial and similar hatred. That meant they had to implement robust user and content management policies to manage what appeared on these sites, something that can be very difficult with large networks.

Previously, the only way to offer content that isn’t controlled by the social media establishment was to set up and run a blog or forum. This required a fair bit of technical knowhow, and you also had to have a domain name and a business relationship with a Web hosting service. Then you would have to run something like WordPress, phpBB or vBulletin so you could concentrate on running the blog or forum.

It also included exposing your content to your desired audience, which might require you to use the established social networks with codified dog-whistle language in your posts, making sure your presence there existed simply to draw traffic to your site. To “make it pay”, you would have to set up a shopfront on the Website to sell merchandise, offer advertising space typically to small businesses, or even run the site on a freemium model with a subscription-driven membership system.

The networks are Parler and MeWe which also have iOS and Android native clients available through Apple’s and Google’s mobile app stores. Gab also exists but Apple and Google won’t admit native clients for this service to their app stores. Rumble offers a video-focused service that works similarly to YouTube, with this service being cross-promoted on Parler, MeWe and Gab.

These alternative social networks implement business models that are less dependent on advertising like subscription-driven “freemium” setups. Along with that, the networks adopt “light-touch” policies regarding the management of users and the content they share, with them billing themselves as “free-speech” alternatives.

There has been strong interest in these networks over the past year, highlighted in the number of accounts created and the number of native mobile clients downloaded from the mobile-platform app stores. This is due to the USA’s knife-edge Presidential election and the COVID-19 coronavirus plague, with some people wanting to seek out information that isn’t “fit for television”, i.e. accepted by traditional media and the main Silicon Valley social networks.

Unlike previous alternative-media setups like community broadcasting, small-scale newspapers, computer bulletin boards and the early days of the Internet, these networks are gaining a strong following amongst the hard right, including conspiracy theorists and Trump loyalists. There is even interest amongst the USA’s Republican Party in shifting towards these services as a way to move away from what they see as the “left-leaning media establishment”, something that is symptomatic of how hyper-partisan the US has become.

A question that will be raised is how large these networks’ user bases will be in a few years’ time after the dust settles on Donald Trump and the COVID-19 pandemic. But I see them and newer alternative social networks maintaining their position, especially for those who relate to others that have opinions or follow topics that are “against the grain”.

WhatsApp to allow users to search the Web regarding content in their messages

WhatsApp Search The Web infographic courtesy of WhatsApp

WhatsApp to allow you to search the Web for text related to viral messages posted on that instant messaging app

Article

WhatsApp Pilots ‘Search the Web’ Tool for Fact-Checking Forwarded Messages | Gizmodo Australia

From the horse’s mouth

WhatsApp

Search The Web (blog post)

My Comments

WhatsApp is taking action to highlight the fact that fake news and disinformation don’t just get passed through the Social Web. Here, they are highlighting the use of instant messaging and, to some extent, email as a vector for this kind of traffic, a practice as old as the World Wide Web.

They have improved on their previous efforts regarding this kind of traffic initially by using a “double-arrow” icon on the left of messages that have been forwarded five or more times.

But now they are trialling an option to allow users to Google the contents of a forwarded message to check its veracity. One way to check a news item’s veracity is whether one or more news publishers or broadcasters that you trust are covering the story and what kind of light they are shining on it.

Here, the function manifests as a magnifying-glass icon that conditionally appears near forwarded messages. If you click or tap on this icon, you start a browser session that shows the results of a pre-constructed Google-search Weblink created by WhatsApp. It avoids the need to copy and paste the contents of a forwarded message from WhatsApp to your favourite browser running your favourite search engine, or to the Google app’s search box, something that can be very fiddly on mobile devices.

But does this function break end-to-end encryption that WhatsApp implements for the conversations? No, because it works on the cleartext that you see on your screen and is simply creating the specially-crafted Google-search Weblink that is passed to whatever software handles Weblinks by default.
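The mechanics described above are simple to illustrate. The sketch below is an assumption for illustration only: WhatsApp’s exact URL format isn’t documented here, but a pre-constructed search Weblink built from a message’s visible cleartext would look something like this.

```python
from urllib.parse import quote_plus

def build_search_link(message_text):
    """Turn the visible cleartext of a forwarded message into a
    Google-search Weblink that the default browser can open.
    The encryption of the conversation is untouched: we only work
    on the decrypted text already shown on the user's screen."""
    return "https://www.google.com/search?q=" + quote_plus(message_text)

print(build_search_link("5G towers spread the virus"))
# https://www.google.com/search?q=5G+towers+spread+the+virus
```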

An initial pilot run is being made available in Italy, Brazil, Ireland (Eire), UK, Mexico, Spain and the USA. It will be part of the iOS and Android native clients and the messaging service’s Web client.

WhatsApp could evolve this function further by allowing the user to use different search engines like Bing or DuckDuckGo. But they would have to know of any platform-specific syntax requirements for each of these search engines, and it may be a feature that would have to be rolled out in a piecemeal fashion.

They could offer the “search the Web” function for any message, rather than only for forwarded messages. I see it as relevant for people who use WhatsApp’s group-chatting functionality, because people can use a group chat as a place to post a rant that links to a questionable Web resource. Or you may have a relative or friend who simply posts questionable information as part of their conversation with you.

At least WhatsApp are adding features to their chat platform’s client software to make it easier to put the brakes on disinformation spreading through it. This could be something that could be investigated by other instant-messaging platforms, including SMS/MMS text clients.

A digital watermark to identify the authenticity of news photos

Articles

ABC News 24 coronavirus coverage

The news services that appear on the “screen of respect” that is the main TV screen, like the ABC, are often seen as being “of respect”, and all the on-screen text is part of their identity

TNI steps up fight against disinformation | Advanced Television

News outlets will digitally watermark content to limit misinformation | Engadget

News Organizations Will Start Using Digital Watermarks To Combat Fake News | Ubergizmo

My Comments

The Trusted News Initiative is a recently formed group of global news and tech organisations, mostly household names in these fields, who are working together to stop the spread of disinformation where it poses a risk of real-world harm. It also includes flagging misinformation that undermines trust in the TNI’s partner news providers like the BBC. Here, the online platforms can review the content that comes in, perhaps red-flagging questionable content, and newsrooms avoid blindly republishing it.

ABC News website

.. as well as their online presence – they will benefit from having their imagery authenticated by a TNI watermark

One of their efforts is to agree on and establish an early-warning system to combat the spread of fake news and disinformation. It is being established in the months leading up to polling day for the 2020 US Presidential Election and flags disinformation where there is an immediate threat to life or to election integrity.

It is based on efforts to tackle disinformation associated with the 2019 UK general election, the Taiwan 2020 general election, and the COVID-19 coronavirus plague.

Another tactic is Project Origin, which this article is primarily about.

An issue often associated with fake news and disinformation is the use of imagery and graphics to make the news look credible and from a trusted source.

Typically this involves altered or synthesised images and vision overlaid with the logos and other trade dress associated with the BBC, CNN or another newsroom of respect. This conveys to people who view it online or on TV that the news is real and comes from a respected source.

Project Origin is about creating a watermark for imagery and vision that comes from a particular authentic content creator. This will degrade whenever the content is manipulated. It will be based around open standards overseen by TNI that relate to authenticating visual content thus avoiding the need to reinvent the wheel when it comes to developing any software for this to work.
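Project Origin’s actual watermarking standards aren’t spelt out here, so the following is only a loose illustration of the underlying principle: a content creator signs the exact bytes of an image, and any manipulation of those bytes breaks verification. The newsroom key and function names are my own inventions; a real scheme would use public-key infrastructure rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical signing key held by the authentic content creator.
NEWSROOM_KEY = b"example-newsroom-signing-key"

def sign_image(image_bytes):
    """Produce an authentication tag over the image's exact bytes."""
    return hmac.new(NEWSROOM_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes, signature):
    """Check that the bytes are unchanged since they were signed."""
    return hmac.compare_digest(sign_image(image_bytes), signature)

original = b"\x89PNG...original image data..."
sig = sign_image(original)
print(verify_image(original, sig))            # True: content untouched
print(verify_image(original + b"edit", sig))  # False: content was manipulated
```

This also shows why the watermark "degrades" on manipulation: the tag is bound to the precise content, so even a one-byte edit fails the check.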

One question I would have is whether it is only readable by computer equipment, or if there is a human-visible element like the logo “bug” that appears in the corner of video content you see on TV. If it is machine-readable only, will there be the ability for a news publisher or broadcaster to overlay a graphic or message stating the authenticity at the point of publication? Similarly, would a Web browser or a native client for an online service have extra logic to indicate the authenticity of an image or video footage?

I would also like to see the actual date of the image or footage being part of the watermark. This is because some fake news tends to be corroborated with older lookalike imagery, like crowd footage from a similar but prior event, to convince the viewer. Some of us may also look at the idea of embedding the actual or approximate location of the image or footage in the watermark.

There is also the issue of newsrooms importing images and footage from other sources whose equipment they don’t control. For example, an increasing amount of amateur and video-surveillance imagery is used in the news, usually because the amateur photographer or the video-surveillance setup has the “first images” of the news event. Then there is reliance on stock-image libraries and image archives for extra or historical footage, along with newsrooms and news / PR agencies sharing imagery with each other. Let’s not forget media companies who engage “stringers” (freelance photographers and videographers) who supply images and vision taken with their own equipment.

The question with all this, especially with amateur, video-surveillance or stringer footage taken with equipment that media organisations don’t control, is how such imagery can be authenticated by a newsroom. This is more so where the image just came off a source like someone’s smartphone or the DVR equipment within a premises’ security room. There is also the factor that one source could tender the same imagery to multiple media outlets, whether through a media-relations team or simply by offering it around.

At least Project Origin will be useful as a method to allow the audience to know the authenticity and provenance of imagery that is purported to corroborate a newsworthy event.

What can be done about taming political rhetoric on online services?

Article

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Online services may have to observe similar rules to traditional media and postal services when it comes to handling election and referendum campaigns

There’s a simple way to reduce extreme political rhetoric on Facebook and Twitter | FastCompany

My Comments

In this day and age, a key issue that is being raised regarding the management of elections and referenda is the existence of extreme political rhetoric on social media and other online services.

But the main cause of this problem is the algorithmic nature associated with most online services. This can affect what appears in a user’s default news feed when they start a Facebook, Twitter or Instagram session; whether a bulk-distributed email ends up in the user’s email inbox or spam folder; whether the advertising associated with a campaign appears in search-driven or display online advertising; or if the link appears on the first page of a search-engine user experience.

This is compared to what happens with traditional media or postal services during an election or referendum. In most democracies around the world, there are regulations overseen by the electoral-oversight, broadcasting and postal authorities regarding equal access to airtime, media space and the postal system for candidates or political parties in an election, or for organisations defending each option available in a referendum. If the medium or platform isn’t regulated by the government, such as with out-of-home advertising or print media, the peak bodies associated with that space establish equal lowest-cost access through various policies.

Examples of this include an equal number of TV or radio commercial spots made available at the cheapest advertising rate for candidates or political parties contesting a poll, including the same level of access to prime-time advertising spaces; scheduled broadcast debates or policy statements on free-to-air TV with equal access for candidates; or the postal service guaranteeing priority throughput of election matter for each contestant at the same low cost.

These regulations or policies are there to make it hard for a candidate, political party or similar organisation to “game” the system, while allowing voters to make an informed choice about whom or what they vote for. But the algorithmic approach associated with the online services doesn’t guarantee candidates equal access to the voters’ eyeballs, thus encouraging the creation of incendiary content that can go viral and be shared amongst many people.

What needs to happen is that online services establish a set of policies regarding advertising and editorial content tendered by candidates, political parties and allied organisations, in order to guarantee equal delivery of the content. This means marking such content so it gains equal rotation in an online-advertising platform; using “override markers” that provide guaranteed recorded delivery of election matter to one’s email inbox; or masking interaction details associated with election matter posted on a Facebook news feed.
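The “equal rotation” idea can be sketched in a few lines: election ad slots are served round-robin across the contestants, so no campaign can buy a disproportionate share of the voters’ eyeballs. This is an illustrative toy, not any platform’s actual ad-serving logic; the function and party names are hypothetical.

```python
from itertools import cycle

def equal_rotation(ads_by_candidate, slots):
    """Serve a number of ad slots by cycling through the candidates
    in turn, so each contestant gets an equal share of impressions."""
    rotation = cycle(ads_by_candidate.items())
    counters = {name: 0 for name in ads_by_candidate}
    served = []
    for _ in range(slots):
        name, ads = next(rotation)
        served.append(ads[counters[name] % len(ads)])  # cycle that party's creatives
        counters[name] += 1
    return served

print(equal_rotation({"Party A": ["A1", "A2"], "Party B": ["B1"]}, 4))
# ['A1', 'B1', 'A2', 'B1']
```

Note the contrast with an auction: here the order of delivery depends only on turn-taking, never on what each campaign is willing to pay.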

But the most important requirement is that the online platforms cannot censor or interfere with the editorial content of the message that is being delivered to the voters by them. It is being seen as important especially in a hyper-partisan USA where it is perceived by conservative thinkers that Silicon Valley is imposing Northern-Californian / Bay-Area values upon people who use or publish through their online services.

A question that can easily crop up is the delivery of election matter beyond the jurisdiction affected by the poll. Internet-based platforms make this very feasible, and it may be considered important for, say, a country’s expats who want to cast their vote in their homeland’s elections. But people who don’t live within, or have ties to, the affected jurisdiction may see it as material of little value if there is a requirement to deliver electoral material beyond a jurisdiction’s borders. This could be answered by giving social-media and email users, or online publishers, configurable options to receive and show material from multiple jurisdictions rather than only the end-user’s current jurisdiction.

What is being realised here is that online services will need to take a leaf out of the playbook of traditional regulated media and communications to guarantee election candidates fair and equal access to voters through these platforms.

Facebook now offers a way to turn off political ads on its main platforms

Article

Facebook login page

Don’t want political ads in your Facebook or Instagram feed? You’ll be able to turn that off | CNet

From the horse’s mouth

Facebook

Launching The Largest Voting Information Effort in US History (Press Release)

Videos

Control Political Ad Content on Facebook (Click or tap to play)

Control Political Ad Content on Instagram (Click or tap to play)

My Comments

Facebook is introducing a feature that allows its users to effectively “mute” political advertising, including issues-driven advertising, on its main social-Web platform as well as on Instagram.

This feature will be available to USA-based accounts as part of Facebook’s voter-information features for the 2020 Presidential Elections. That includes information on how and where to register along with where and when to vote, including early-voting (pre-poll voting) and postal-voting information. It underscores Facebook’s role as part of Silicon Valley’s effort to “get out the vote” in the USA.

Personally, I am not sure whether this setup will provide information relevant to American expats who have moved to other countries, such as how their local US embassy or consulate is facilitating their vote. In most cases these expats will still have voting rights of some sort in US elections.

The option will be available under “Ad Preferences” in your user-account settings on both Facebook and Instagram. Both platforms will also offer a contextual option on political ads, highlighted under a stylised “i”, that lets you see fewer ads of this type. This can be set up through your Web-based user experience or the official native mobile-platform apps for these platforms.

Of course, there won’t be the ability to regulate editorial content from media organisations that is posted or shared through Facebook or Instagram, which will be an issue with media outlets that have a highly partisan editorial policy. Nor will there be the ability to control posts, shares and comments from Pages and Profiles that aren’t distributed as paid advertisements.

There may also be questions about whether your favourite politician’s, political party’s or civic-society organisation’s Facebook or Instagram traffic will appear in your platform’s main view, especially if they pay to increase the viewership of those posts. This can be of concern to those of us who have a strong role in political and civic society and see that Facebook traffic as a “news-ticker” for the political entities we engage with.

Facebook intends to roll this feature out to other countries where it has established systems for managing and monitoring political advertising on its platforms. At least it is the first online ad platform to give users control over the political and issue advertising they see while they use that platform.

Keeping the same character within your online community

Article

Facebook login page

Online communities do represent a lot of hard work and continuous effort including having many moderators

General Election 2019: Has your local Facebook group been hijacked by politics? | BBC News

My Comments

The recent UK general election highlighted an issue with the management of online communities, especially those targeted at neighbourhoods.

In the BBC News article, a local Facebook group that a neighbourhood used for sharing advice, recommending businesses, advertising local events, “lost-and-found” notices and similar purposes was steered away from that purpose and turned into a political discussion board.

You may or may not think that politics should have something to do with your neighbourhood, but ordinarily it stays well clear. That is unless you are dealing with a locally-focused issue like the availability of publicly-funded services such as healthcare, education or transport infrastructure in your neighbourhood, or a property development before the local council that could affect it.

How that came about was that the group was managed by a single older person, who passed away. With the loss of its administrator, the group effectively became a headless “zombie” group with no oversight over what was being posted.

That happened as the UK general election drew near and the politics “heated up”, especially as the affected neighbourhood was in a marginal electorate. Here, the neighbourhood group “lost it” when it came to political content, with the acrimony intensifying after the close of polls. The late administrator’s widow even stated that the online group was being hijacked by others pushing their own agendas.

Subsequently, several members of that neighbourhood online forum stepped in to effectively wrest back control and restore sanity to it. This included laying down rules against online bullying and hate speech and encouraging proper courtesy on the bulletin board. It was hard to steer the forum back to that sense of normalcy due to pushback from some members of the group and the behaviour that had become established during the power vacuum.

This kind of behaviour, like all other misbehaviour facilitated through the Social Web and other Internet platforms, exploits the perceived distance that the Internet offers. It is something you wouldn’t do to someone face-to-face.

What was identified here was a loss of effective management power over that online group: the leader who maintained the group’s character was gone, and no-one effectively stepped up to fill the void. This can easily happen with any form of online forum or bulletin board, including an impromptu “group chat” set up on a platform like WhatsApp, Facebook Messenger or Viber.

It is like a real-life situation with an organisation such as a family business, where people have put in the hard yards to maintain a particular character, then effective control of that organisation is lost and no-one steps up to the plate to maintain that same character. This kind of situation can occur if there isn’t continual thought about succession planning in the organisation’s management, especially if there aren’t any young people in the organisation who are loyal to its character and vision.

An online forum should have the ability, and be encouraged, to have multiple moderators with the same vision so that others can take over if one is no longer able to do the job adequately. Here, you can discover and encourage potential moderators through their active participation online and at any offline events. But you would want moderators with some degree of computer and Internet literacy, so they know their way around the system or require very minimal training.

A multiplicity of moderators caters for unforeseen situations like death or sudden resignation. It also means one of the moderators can travel without needing to keep their “finger on the pulse” of that online community. In the same vein, if they or one of their loved ones falls ill or there is a personal calamity, they can concentrate on their own or their loved one’s recovery and rehabilitation, or on managing the situation.

There is also the reality that a person who moves out of a neighbourhood on good terms will often maintain regular contact with their former neighbours, keeping their “finger on the pulse” of the neighbourhood’s character. A neighbourhood-focused online community can make use of this by keeping such people as “standby moderators” who can be “roped in” if there are too few active moderators.

To keep the same kind of “vibe” within that online community that you manage will require many hands at the pump. It is not just a one-person affair.

WhatsApp now highlights messaging services as a fake-news vector

Articles

WhatsApp debuts fact-checking service to counter fake news in India | Engadget

India: WhatsApp launches fact-check service to fight fake news | Al Jazeera

From the horse’s mouth

WhatsApp

Tips to help prevent the spread of rumors and fake news (User Advice)

Video – Click or tap to play

My Comments

For as long as the World Wide Web has existed, email has been used as a way to share online news amongst people in your social circle.

Typically this has shown up in the form of jokes, articles and the like appearing in your email inbox from friends, colleagues or relatives, sometimes with these articles forwarded on from someone else. It also has been simplified through the ability to add multiple contacts from your contact list to the “To”, “Cc” or “Bcc” fields in the email form or create contact lists or “virtual contacts” from multiple contacts.

The various instant-messaging platforms have also become a vector for sharing links to articles hosted on the Internet in the same manner as email, as have the carrier-based SMS and MMS texting platforms when used with a smartphone.

But the concern raised about the distribution of misinformation and fake news has been focused on the popular social media and image / video sharing platforms. Meanwhile, fake news and misinformation creep into your inbox or instant-messaging client thanks to one or more of your friends who like passing on this kind of information.

WhatsApp, a secure instant-messaging platform owned by Facebook, is starting to tackle this issue head-on with its Indian userbase as that country enters the election cycle for its main general elections. The company is picking up on the issue of fake news and misinformation after the Facebook group of companies was brought into the public limelight over it. Facebook has also recently been clamping down on inauthentic behaviour targeting India and Pakistan.

WhatsApp is now highlighting the fake-news problem in India, where it is an especially popular instant messenger. It is working with a local fact-checking startup called Proto to create the Checkpoint Tipline, which allows users to have links that are sent to them verified. It is driven by a “virtual contact” to which WhatsApp users forward questionable links or imagery.

But due to its end-to-end encryption, and the fact that the service is purely a messaging service, WhatsApp itself can’t inspect, verify or flag questionable content in transit. It has, however, placed limits on the number of chats a message can be forwarded to in order to tame the spread of rumours.
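The forwarding limit just described can be sketched as a simple client-side cap. This is a hypothetical illustration, not WhatsApp’s actual implementation; the class name, the limit value and the data structures are my own assumptions.

```python
class ForwardLimiter:
    """Client-side cap on message forwarding, in the spirit of
    WhatsApp's forwarding limits. Numbers and structure are
    illustrative assumptions, not the real implementation."""

    def __init__(self, max_recipients=5):
        self.max_recipients = max_recipients
        self.forward_counts = {}  # message_id -> chats forwarded to so far

    def try_forward(self, message_id, chat_ids):
        already = self.forward_counts.get(message_id, 0)
        if already + len(chat_ids) > self.max_recipients:
            return False  # refuse: would exceed the forwarding cap
        self.forward_counts[message_id] = already + len(chat_ids)
        return True

limiter = ForwardLimiter(max_recipients=5)
ok_first = limiter.try_forward("msg1", ["chat1", "chat2", "chat3"])   # allowed
ok_second = limiter.try_forward("msg1", ["chat4", "chat5", "chat6"])  # refused
```

Because the cap is enforced per message on the sending client, the service never needs to read the message content, which is consistent with end-to-end encryption.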

The tipline is also being used as a tool to gauge the level of fake news and misinformation on the messaging platform and to see how much of a vector these platforms are.

Personally, I would like to see the various fact-checking agencies set up an email mailbox to which you can forward emails with questionable links and imagery, so they can verify the rumour mail doing the rounds. It could operate in a similar vein to the mailboxes that banks, tax offices and the like have set up for people to forward phishing email to, so those organisations can be aware of the phishing problem they are facing.

The only problem with this kind of service is that people who are astute and savvy are more likely to use it, while those of us who just pass on whatever comes our way may never touch it.