CAPTCHAs are used to prevent spam emails or comments on Websites and to assure that people who register in an online context are real people.
But these measures, which typically involve transcribing letters or identifying objects, can be very frustrating for many people. This is caused by hard-to-read or small letters, or object-identification instructions that are difficult to understand in a particular language or cultural context. As well, some of these CAPTCHAs don't work well on mobile setups like smartphones, which are increasingly the common way to use the Internet. That leads to abandoned registrations or online-shopping carts, or people not joining online services at all.
Scanning your fingerprint on your laptop's fingerprint scanner or entering your device's PIN code can prove that a person is entering the data
Cloudflare are working on a different approach to authenticating the personhood of a device user without resorting to letters to transcribe or objects to identify. Initially they are using USB security keys for this purpose but are moving towards a full WebAuthn implementation.
This approach will work with WebAuthn-capable browser and operating-system setups, in a similar vein to password-free authentication for online services using that technology. It will require you to enter your device PIN, use face recognition or the fingerprint reader, or operate a USB security key or an authenticator app on your smartphone to prove your personhood, as if you were enrolling in an online service that implements WebAuthn technology.
The success or failure of the WebAuthn test will simply determine whether you can submit that form on the Website. Under default setups, the logic won't cause any extra identifying factors to be stored on the online service's server. But it may store a device-local cookie to record success and treat the session as authenticated, catering towards data-revision approaches in wizard-based forms or long data-entry sessions.
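The challenge-response idea underpinning this can be sketched in a few lines. This is a heavily simplified illustration only: real WebAuthn uses public-key signatures generated by a secure element after local user verification, whereas this sketch substitutes an HMAC over a shared secret purely to show the shape of the exchange; all function names here are invented for the example.

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> bytes:
    """Server side: generate a fresh random challenge for each attempt,
    so a captured response can't be replayed later."""
    return secrets.token_bytes(32)

def sign_challenge(device_key: bytes, challenge: bytes) -> bytes:
    """Authenticator side: sign the challenge after the user verifies
    themselves locally (PIN, fingerprint, face). HMAC stands in for the
    public-key signature a real authenticator would produce."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def verify(device_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Server side: a valid response lets the form submission proceed;
    no identifying data about the user needs to be stored."""
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

The key point the sketch captures is that the server only ever learns "a verified person responded to this one-off challenge", not who that person is.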
A question I would have with this Cloudflare approach is how it can work with computing setups that don't support WebAuthn. This also includes shared computing setups and public-access computers where this kind of authentication may not be practicable for a single session.
But Cloudflare's effort is taking WebAuthn further as a way to prove that a real person rather than a robot is actually operating an online account, in a manner that is universal across abilities, languages and cultures.
As the COVID-19 coronavirus plague had us homebound and staying indoors, we were making increased use of Zoom and similar multi-party video conference software for work, education and social needs. This included an increased amount of telemedicine taking place where people were engaging with their doctors, psychologists and other specialists using this technology.
This increased ubiquity of multi-party videoconferencing raised concerns about the data-security, user-privacy and business-confidentiality implications of this technology. This was due to situations like business videoconference platforms being used for personal videoconferencing and vice versa. In some cases it was about videoconferencing platforms not being fit for purpose due to gaping holes in their security and privacy setup, along with the difficult user interfaces that some of these platforms offered.
During August 2020, the public data-protection authorities in Australia, Canada, Hong Kong, Gibraltar, Switzerland and the UK called this out as a serious issue in the form of open letters to the various popular videoconferencing platforms. There has been some improvement since, with Zoom implementing end-to-end encryption and improved meeting-control facilities, and some client software for the various platforms offering privacy features like defocusing backgrounds.
Zoom has now answered the call for transparency regarding user privacy by notifying all the participants in a multi-party videoconference about who can save or share content from the videoconference. This comes into play with particular features and apps like recording, transcription, polls and Q&A functionality. It will also notify others if someone is running a Zoom enhanced-functionality app that may compromise other users' privacy.
There is also the issue of alerting users about who the account owner is in relation to these privacy issues. For corporate or education accounts, this would be the business or educational institution who set up the account. But most of us who operate our personal Zoom accounts would have the accounts in our name.
Personally, I would also like to have the option to know about data-sovereignty information for corporate, education or similar accounts. This can be important if Zoom supports on-premises data storage or establishes "data-trustee" relationships with other telco or IT companies and uses this as a means to assure proper user privacy, business confidentiality and data sovereignty. A good example of this could be the European public data cloud that Germany and France want to set up to compete with American and Chinese offerings while supporting European values.
Another issue is how this will come about during a video conference where the user is operating their session full-screen with the typical tile-up view but not using the enhanced-functionality features. Could this be like Websites that pop up a consent notification disclosing what cookies or similar features are in use when one visits the Website for the first time or moves to other pages?
It will be delivered as part of the latest updates for Zoom client software across all the platforms. This may also be a feature that will have to come about for other popular videoconferencing platforms like Microsoft Teams or Skype as a way to assure users of their conversation privacy and business confidentiality.
There is a constant data-security and user-privacy risk associated with mobile computing.
And this is being underscored heavily as a significant number of mobile apps are part of “app-cessory” ecosystems for various Internet-of-Things devices. That is where a mobile app is serving as a control surface for one of these devices. Let’s not forget that VPNs are coming to the fore as a data-security and user-privacy aid for our personal-computing lives.
Expect this to appear alongside mobile-platform apps to signify they are designed for security
But how can we be sure that an app that we install on our smartphones or tablets is written to best security practices? What is being identified is a need for an industry standard supported by a trademarked logo that allows us to know that this kind of software is written for security.
A group called the Internet of Secure Things Alliance, known as ioXT, have started to define basic standards for secure Internet-of-Things ecosystems. Here they have defined various device profiles for different Internet-of-Things device types and determined minimum and recommended requirements for a device to be certified as being “secure” by them. This then allows the vendor to show a distinct ioXT-secure logo on the product or associated material.
Now Google and others have worked with ioXT to define a Mobile Application Profile that sets out minimum security standards for mobile-platform software in order to be deemed secure by them. At the moment, this is focused towards app-cessory software that works with connected devices along with consumer-facing privacy-focused VPN endpoint software. For that matter, Google is behind a “white-box” user-privacy VPN solution that can be offered under different labels.
This device profile has been written in an “open form” to cater towards other mobile app classes that need to have specific data-security and user-privacy requirements. This will come about as ioXT revises the Mobile Application Profile.
The ioXT Internet-of-Secure-Things platform could be extended to certifying more classes of native mobile-platform and desktop-platform software that works with the Internet of Everything. The VPN aspect of the Mobile Application Profile can also apply to native desktop VPN-management clients or native and Web software intended to manage router-based VPN setups.
At least a non-perpetual certification program with a trademarked logo now exists for the Internet of Everything and mobile apps to assure customers that the hardware and software is secure by design and default.
I have just read a Swiss article which talked about the US and Chinese hyperscale cloud platforms dominating the European cloud-computing scene. But this article stated that European cloud-computing / online-service providers are catching up with these behemoths. These companies are using data protection as a selling point due to data-protection and user-privacy concerns among European businesses and government authorities.
A recent survey completed by the French IT consultancy Capgemini highlighted that the German-speaking part of Europe (Germany, Austria and Switzerland) was buying minimal European IT services. But the same Capgemini survey found that 45% of the respondents wanted to move to European providers in the future thanks to data-protection and data-sovereignty issues.
Data security is being given increasing importance due to recent cyber attacks and the increased digitalisation of production processes. But the Europeans have very strong data protection and end-user privacy mandates at national and EU level thanks to a strong respect for privacy and confidentiality within modern Europe.
COVID-19 had placed a lot of European IT projects on ice but there has been a constant push to assure business continuity even under the various public-health restrictions mandated by this plague. This includes the support for distributed working whether that be home-office working or remote working.
But how is this relevant to European households, small businesses and community organisations? I do see this as being relevant due to the use of various online and cloud IT services as part of our personal lives, thanks to the likes of search engines, email / messaging, the Social Web, online entertainment, and voice-driven assistants. As well, small businesses and community organisations show interest in online and cloud-based computing as a means of benefiting from what may be seen as "big-time" IT without needing much in the way of capital expenditure.
It will be a slow and steady effort for Europe to have online and cloud computing on a par with the US and Asian establishment but this will be about services that respect European privacy, security and data-sovereignty values.
With the COVID-19 pandemic causing us to work or study from home, we have seen increased use of videoconferencing platforms like Zoom.
It has led to the convergence of business and personal use of popular multiparty videoconferencing platforms; be it business platforms of the Zoom and Microsoft Teams ilk serving personal, social and community needs; or personal platforms like Skype and WhatsApp being used for business purposes. This is more so with small businesses, community organisations and the like who don't have their own IT team to manage this software. The software developers even support this convergence by adding "personal and social" features and free social-user tiers to business platforms, or adding business features to personal platforms.
But this has brought along its fair share of miscreants. A key example of this is "Zoombombing", where these miscreants join a Zoom meeting in order to disrupt it. This manifests in disruptive comments being put into the meeting or, at worst, all sorts of filth unfit for the office or family home appearing on our screens. In fact, a significant number of high-profile Zoom virtual events have been disrupted that way, and a significant number of governments have cited this phenomenon as part of raising questions about videoconferencing platform security.
This has been facilitated by Zoom and similar business videoconferencing platforms allowing people to join a videoconference by clicking on a meeting-specific URL. This is compared to Skype, Viber, Facebook Messenger, WhatsApp and similar personal videoconferencing platforms, which operate on an in-platform invitation protocol for joining these meetings.
But these Weblinks have been posted on the Social Web for every man and his dog to see. Some online forums have even been hurriedly set up for people to solicit others to disrupt online meetings.
Zoom recently took action by requiring the use of meeting passwords and waiting-room setups, enabling both by default. As well, meeting hosts and participants have been encouraged not to place meeting URLs and passwords on any part of the Web open to the public. Rather, they are to send the link via email or instant messaging and to send the password under separate cover.
They also have the ability to lock the meeting so no further attendees can come in, which is good if the meeting is based around known attendees. There is also the ability for the host to control resource-sharing and remote-control functionality that Zoom offers. Let’s not forget that they also added meeting-wide end-to-end encryption for increasingly-secure meetings.
But Zoom has taken further action by offering meeting hosts more tools to control their meeting, a feature available to all client software and to all user classes whether free or paid.
There is the ability for the Zoom meeting host to pause the meeting. Once this is invoked, no activity can take place in the meeting, including in any breakout rooms that the meeting has spawned. Hosts also have the ability to report the meeting to Zoom's platform-wide security team and to selectively enable each meeting feature. They can also report disruptive users to that security team, which files the report and gives the disruptive user the royal order of the boot from that meeting.
Another feature that has been introduced thanks to the “join by URL” method that Zoom supports is for meeting hosts to be alerted if their meeting is at risk of disruption. Zoom facilitates this using a Webcrawler that hunts for meeting URLs on the public Web and alerts the meeting host if their meeting’s URL is posted there such as being on the Social Web. Here, they are given the opportunity to change the URL to deflect any potential Zoombomb attempts.
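The crawler side of this can be illustrated simply: scan scraped page text for join links and pull out the meeting IDs so the hosts can be alerted. This is a hedged sketch only; the URL pattern below is an assumption based on the common `https://zoom.us/j/<meeting-id>` join-link format, and `find_meeting_ids` is an invented name, not anything from Zoom's actual tooling.

```python
import re

# Assumed join-link shape: optional subdomain, then zoom.us/j/ and a
# 9-to-11-digit meeting ID (the format commonly seen in the wild).
ZOOM_LINK = re.compile(r"https://(?:[\w-]+\.)?zoom\.us/j/(\d{9,11})")

def find_meeting_ids(page_text: str) -> list[str]:
    """Return the meeting IDs found in a scraped public Web page, so the
    platform could warn the hosts of those meetings that their link is
    exposed and offer to change it."""
    return ZOOM_LINK.findall(page_text)
```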
But this year has become a key year as far as multiparty videoconferencing is concerned due to our reliance on it. Here, it may be about seeing less differentiation between business-use and personal-use platforms or the definition of a basic feature set that these videoconferencing platforms are meant to have with secure private operation being part of that definition.
Most recently-built desktop and laptop regular computers that run Windows, especially business-focused machines offered by big brands, implement a secure element known as the Trusted Platform Module. This is where encryption keys for functions like BitLocker, Windows Hello or Windows-based password vaults are kept. But this is kept as a separate chip on the computer’s motherboard in most cases.
But Microsoft are taking a different approach to providing a secure element on their Windows-based regular-computer platform. Here, this is in the form of keeping the Trusted Platform Module on the same piece of silicon as the computer’s main CPU “brain”.
Microsoft initially implemented a security-chip-within-CPU approach with their Xbox platform as a digital-rights-management measure. Other manufacturers have implemented this approach in some form for their computing devices, such as Samsung implementing it in the latest Galaxy S smartphones or Apple implementing it as the T2 security chip within newer Macintosh regular computers. There is even an Internet-of-Things platform known as Azure Sphere which implements the "security-chip-within-CPU" approach.
This approach works around the security risk of a person gaining physical access to a computer to exfiltrate encryption keys and sensitive data held within the Trusted Platform Module due to it being a separate chip from the main CPU. As well, before Microsoft announced the Pluton design, they subjected it to many security tests, including stress tests, so that it wouldn't haunt them with the same kind of weaknesses that affect the Apple T2 security chip, which was launched in 2017.
Intel, AMD and Qualcomm who design and make CPUs for Windows-based regular computers have worked with Microsoft to finalise this “security-chip-within-CPU” design. Here, they will offer it in subsequent x86-based and ARM-based CPU designs.
The TPM application-programming-interface “hooks” will stay the same as far as Windows and application-software development is concerned. This means that there is no need to rewrite Windows or any security software to take advantage of this chipset design. The Microsoft Pluton approach will benefit from “over-the-air” software updates which, for Windows users, will come as part of the “Patch Tuesday” update cycle.
More users will stand to benefit from “secure-element” computing including those who custom-build their computer systems or buy “white-label” desktop computer systems from independent computer stores.
As well, Linux users will stand to benefit due to efforts to make this open-source and available to that operating-system platform. In the same context, it could allow increasingly-secure computing to be part of the operating system and could open up standard secure computing approaches for Linux-derived “open-frame” computer platforms like Google’s ChromeOS or Android.
Here, the idea of a secure element integrated within a CPU chip die isn’t just for digital-rights-management anymore. It answers the common business and consumer need for stronger data security, user privacy, business confidentiality and operational robustness. There is also the goal of achieving secure computing from the local processing silicon to the cloud for online computing needs.
Microsoft hasn’t opened up regarding whether the Pluton trusted-computing design will be available to all silicon vendors or whether there are plans to open-source the design. But this could lead to an increasingly-robust secure-element approach for Windows and other computing platforms.
Truepic, an image authentication service, are partnering with Qualcomm to develop hardware-based authentication of images as they are taken. Qualcomm has become the first manufacturer of choice because it supplies the ARM-based silicon used in most Android smartphones and the Windows 10 ARM platform.
This will use actual time and date, data gained from various device sensors and the image itself as it is taken to attach a certificate of authenticity to that image or video footage. This will be used to guarantee the authenticity of the photos or vision before they leave the user’s phone.
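The general shape of such a capture-time certificate can be sketched as follows. This is purely illustrative: Truepic's actual scheme, field names and formats are not public in this article, and the HMAC signing here stands in for the hardware-backed signing that would happen inside the camera firmware; every name in the sketch is invented for the example.

```python
import hashlib
import hmac
import json

def make_capture_certificate(device_key: bytes, image_bytes: bytes,
                             timestamp: str, sensor_data: dict) -> dict:
    """Bind the image bytes, capture time and sensor metadata together
    and sign the bundle before it leaves the device."""
    payload = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "timestamp": timestamp,       # ideally ISO 8601 with timezone
        "sensor_data": sensor_data,   # e.g. GPS fix, device model
    }
    canonical = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(device_key, canonical,
                                    hashlib.sha256).hexdigest()
    return payload

def verify_certificate(device_key: bytes, image_bytes: bytes,
                       cert: dict) -> bool:
    """Any later edit to the image, time or metadata breaks the check."""
    claimed = dict(cert)
    sig = claimed.pop("signature")
    if claimed["image_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False
    canonical = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(device_key, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

The design point the sketch shows is that the guarantee is made at the moment of capture: anything downstream can only verify, not retrofit, authenticity.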
TruePic primarily implements this technology in industries like banking, insurance, warranty provision and law enforcement to work against fraudulent images being used to corroborate claims or to where imagery has to be of high forensic standards. But at the moment, Truepic implements this technology as an additional app that users have to install.
The partnership with Qualcomm is to integrate the functionality in to the smartphone’s camera firmware so that the software becomes more tamper-evident and this kind of authentication applies to all images captured by that sensor at the user’s discretion.
Truepic is partnering with Qualcomm first because most amateur photos are taken with smartphones that use this kind of silicon. Once the Qualcomm work is done, other camera chipmakers, including Apple, would need to collaborate with Truepic to build authenticated-image technology into their camera technology.
It can then appeal to implementation within standalone camera devices like traditional digital cameras, videosurveillance equipment, dashcams and the like. For example, it can be easier to verify media footage shot on pro gear as being authentic or to have videosurveillance footage being offered as evidence verified as being forensically accurate. But in these cases, there may be calls for the devices to be able to have access to highly-accurate time and location references for this to work.
The watermark generated by this technology will be intended to be machine-readable and packaged with the image file. This will make it easier for software to show whether the image is authentic or not, and such software could be part of the Trusted News Initiative to authenticate amateur, stringer or other imagery or footage that comes into a newsroom's workflow. Or it could be used by eBay, Facebook or Tinder to indicate whether images or vision are a genuine representation of the goods for sale or the person behind a profile.
But this technology needs to also apply to images captured by dedicated digital cameras like this Canon PowerShot G1 X
The idea would be to offer this function in an opt-in manner, typically as a shooting "mode" within a camera application. This allows the photographer to preserve their privacy. But the use of authenticated photos won't allow users to digitally adjust their original photos to make them look better. The same situation may also apply to the use of digital zoom, which effectively crops photos and videos at the time they are taken.
There is the idea of implementing distributed-ledger technology to track edits made to a photo. This can be used to track at what point the photo was edited and what kind of editing work took place. This kind of ledger technology could also apply to copies of that photo, which will be of importance where people save a copy of the image when they save any edits. This will also apply where a derivative work is created from the source file like a still image or a short clip is obtained from a longer file of existing footage.
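The edit-tracking idea boils down to a hash chain: each edit entry commits to the previous entry, so reordering or rewriting history breaks every later link. The sketch below is a single-process illustration under that assumption; a real distributed ledger adds replication and consensus, and the function names here are invented for the example.

```python
import hashlib

def add_edit(chain: list[dict], description: str, image_bytes: bytes) -> list[dict]:
    """Append an edit record (e.g. "cropped", "colour-corrected") that
    commits to both the resulting image and the previous record."""
    prev_hash = chain[-1]["entry_hash"] if chain else "genesis"
    image_sha256 = hashlib.sha256(image_bytes).hexdigest()
    body = prev_hash + description + image_sha256
    entry = {
        "description": description,
        "image_sha256": image_sha256,
        "prev_hash": prev_hash,
        "entry_hash": hashlib.sha256(body.encode()).hexdigest(),
    }
    return chain + [entry]

def chain_is_intact(chain: list[dict]) -> bool:
    """Recompute every link; any altered, removed or reordered entry
    makes the chain fail verification."""
    prev = "genesis"
    for entry in chain:
        if entry["prev_hash"] != prev:
            return False
        body = prev + entry["description"] + entry["image_sha256"]
        if hashlib.sha256(body.encode()).hexdigest() != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

The same chaining would extend to copies and derivative works: a clip taken from longer footage would simply start its record with a link back to the source file's chain.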
A question that will then come about is how the time of day is recorded in these certificates, including the currently-effective time zone and whether the time is obtained from a highly-accurate reference. Such factors may put in to doubt the forensic accuracy of these certificates as far as when the photo or footage was actually taken.
For most of us, it could come in to its own when combatting deepfake and doctored images used to destabilise society. Those of us who use online dating or social-network platforms may use this to verify the authenticity of a person who is on that platform, thus working against catfishing. Similarly, the use of image authentication at the point of capture may come in to its own when we supply images or video to the media or to corroborate transactions.
Since the COVID-19 coronavirus plague had us housebound even for work or school, we have ended up using videoconferencing platforms more frequently for work, school and social life. The most popular of these platforms ended up being Zoom which effectively became a generic trademark for multiparty videoconferencing.
Now Zoom, as part of its recent Zoomtopia feature-launch multiparty videoconference, has launched a number of new features for their platform. These include virtual participant layouts similar to what Microsoft Teams is offering.
But the important one here is to facilitate end-to-end encryption during multiparty videoconferences. This will be available across all of Zoom's user base, whether free or paid. For the first 30 days from next week, it will run as a technical preview so Zoom can find any bugs in the system.
The end-to-end encryption is based around the meeting host rather than Zoom generating the keypairs for the encryption protocol, which would occur as a videoconference is started and as users come on board. It is a feature that Zoom end-users would need to enable at account level and also activate for each meeting they wish to keep secure. That is different from WhatsApp where end-to-end encryption occurs by default and in a hands-off manner.
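The host-generated-key arrangement can be illustrated with a small sketch. This is a simplified stand-in only: in Zoom's actual design the host's client wraps the meeting key for each participant's public key, whereas here a plain in-memory mapping represents that per-participant handover, and the class and method names are invented for the example.

```python
import hashlib
import secrets

class Meeting:
    """Toy model of host-side meeting-key management for E2E encryption."""

    def __init__(self, host: str):
        self.host = host
        # The host's client, not the Zoom server, generates the key,
        # which is what keeps the platform out of the loop.
        self.meeting_key = secrets.token_bytes(32)
        self.participant_keys: dict[str, bytes] = {host: self.meeting_key}

    def admit(self, participant: str) -> None:
        """Hand the meeting key to a newly admitted participant
        (in reality: encrypted to that participant's public key)."""
        self.participant_keys[participant] = self.meeting_key

    def security_code(self) -> str:
        """Short digest of the key that the host can read aloud so
        participants can confirm they all hold the same key."""
        return hashlib.sha256(self.meeting_key).hexdigest()[:10]
```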
At the moment, updated native Zoom clients will support the end-to-end encryption. You won't have support for it on Zoom Web experiences or third-party devices and services that work with Zoom like the smart displays or Facebook's Portal TV videophone. This situation will be revised as Zoom releases newer APIs and software that answer this need.
If a meeting is operating with end-to-end encryption, a green shield with a lock symbol will appear in the upper left corner to indicate that this is the case. Participants can click on the icon to bring up a verification code and have it confirmed by the meeting host reading it out loud.
Free users will be required to use SMS-based verification when they set up their account for end-to-end encryption. This is a similar user experience to what a lot of online services offer, where a mobile phone number serves as a second factor of authentication.
At least Zoom is taking steps towards making its multiparty videoconference platform safer and more secure for everyone.
I read in Gizmodo how an incendiary hashtag directed against Daniel Andrews, the State Premier of Victoria in Australia, was pushed around the Twittersphere and am raising this as an article. It is part of keeping HomeNetworking01.info readers aware about disinformation tactics as we increasingly rely on the Social Web for our news.
What is a hashtag
A hashtag is a single keyword preceded by a hash ( # ) symbol that is used to identify posts within the Social Web that feature a concept. It was initially introduced in Twitter as a way of indexing posts created on that platform and make them easy to search by concept. But an increasing number of other social-Web platforms have enabled the use of hashtags for the same purpose. They are typically used to embody a slogan or idea in an easy-to-remember way across the social Web.
Most social-media platforms turn these hashtags in to a hyperlink that shows a filtered view of all posts featuring that hashtag. They even use statistical calculations to identify the most popular hashtags on that platform or the ones whose visibility is increasing and present this in meaningful ways like ranked lists or keyword clouds.
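The indexing and trend-ranking described above can be sketched in a few lines. This is a minimal illustration under obvious assumptions, not any platform's actual algorithm (real trend detection weights recency and velocity, not just raw counts), and the function names are invented for the example.

```python
import re
from collections import Counter

# A hashtag: a hash symbol followed by a run of word characters.
HASHTAG = re.compile(r"#(\w+)")

def extract_hashtags(post: str) -> list[str]:
    """Index the hashtags in one post, case-folded so #Sunny and
    #sunny count as the same tag."""
    return [tag.lower() for tag in HASHTAG.findall(post)]

def trending(posts: list[str], top_n: int = 3) -> list[tuple[str, int]]:
    """Rank tags by frequency across a batch of posts, the crude core
    of what a 'trending' list surfaces."""
    counts = Counter(tag for post in posts for tag in extract_hashtags(post))
    return counts.most_common(top_n)
```

It is exactly this frequency-driven surfacing that the inauthentic accounts described below set out to game: enough coordinated posts with one tag and it registers as "trending".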
How this came about
Earlier in the COVID-19 coronavirus pandemic, a hashtag called #ChinaLiedPeopleDied was doing the rounds of the Social Web. This was underscoring a concept, with a very small modicum of truth, that the Chinese government didn't come clean about the genesis of the COVID-19 plague with its worldwide death toll and their role in informing the world about it.
That hashtag was used to fuel Sinophobic hatred against the Chinese community and was one of the first symptoms of questionable information floating around the Social Web regarding COVID-19 issues.
Australia passed through the early months of the COVID-19 plague and one of their border-control measures for this disease was to have incoming travellers required to stay in particular hotels for a fortnight before they can roam around Australia as a quarantine measure. The Australian federal government put this program in the hands of the state governments but offered resources like the use of the military to these governments as part of its implementation.
The second wave of the COVID-19 virus happened within Victoria, and a significant number of the cases were to do with some of the hotels associated with the hotel quarantine program. This caused a very significant death toll and led the state government to impose a raft of very stringent lockdown measures.
A new hashtag called #DanLiedPeopleDied came about because the Premier, Daniel Andrews, as the head of the state's executive government, wasn't perceived to have come clean about any bungles associated with its management of the hotel quarantine program.
On 14 July 2020, this hashtag first appeared in a Twitter account that initially touched on Egyptian politics and delivered its posts in the Arabic language. But it suddenly switched countries, languages and political topics, which is one of the symptoms of a Social Web account existing just to peddle disinformation and propaganda.
The hashtag had laid low until 12 August when a run of Twitter posts featuring it were delivered by hyper-partisan Twitter accounts. This effort, also underscored by newly-created or suspicious accounts that existed to bolster the messaging, was to make it register on Twitter’s systems as a “trending” hashtag.
Subsequently a far-right social-media influencer with a following of 116,000 Twitter accounts ran a post to keep the hashtag going. There was a lot of very low-quality traffic featuring that hashtag or its messaging, including a lot of low-effort memes published to drive the hashtag.
The above-mentioned Gizmodo article has graphs to show how the hashtag appeared over time which is worth having a look at.
What were the main drivers
But a lot of the traffic highlighted in the article was driven by the use of new or inauthentic accounts which aren’t necessarily “bots” – machine operated accounts that provide programmatic responses or posts. Rather this is the handiwork of trolls or sockpuppets (multiple online personas that are perceived to be different but say the same thing).
As well, there was a significant amount of “gaming the algorithm” activity going on in order to raise the profile of that hashtag. This is due to most social-media services implementing algorithms to expose trending activity and populate the user’s main view.
Why this is happening
Like with other fake-news, disinformation and propaganda campaigns, the #DanLiedPeopleDied hashtag is an effort to sow seeds of fear, uncertainty and doubt while bringing about discord with information that has very little in the way of truth. As well the main goal is to cause a popular distrust in leadership figures and entities as well as their advice and efforts.
In this case, the campaign was targeted at us Victorians who were facing social and economic instability associated with the recent stay-at-home orders thanks to COVID-19’s intense reappearance, in order to have us distrust Premier Dan Andrews and the State Government even more. As such, it is an effort to run these kind of campaigns to people who are in a state of vulnerability, when they are less likely to use defences like critical thought to protect themselves against questionable information.
As far as I know, Australia is rated as one of the most sustainable countries in the world by the Fragile States Index, in the same league as the Nordic countries, Switzerland, Canada and New Zealand. This means the country is known to be socially, politically and economically stable. But a targeted information-weaponisation campaign can be used to destabilise even such a country, and we need to be sensitive to such tactics.
One of the key factors behind the problem of information weaponisation is the weakening of traditional media’s role in the dissemination of hard news. This includes younger people preferring to go to online resources, especially the Social Web, portals or news aggregator Websites for their daily news intake. It also includes many established newsrooms receiving reduced funding thanks to reduced advertising, subscription or government income, reducing their ability to pay staff to turn out good-quality news.
When we make use of social media, we need to develop a healthy suspicion regarding what is appearing. Beware of accounts that suddenly appear or develop chameleon behaviours especially when key political events occur around the world. Also be careful about accounts that “spam” their output with a controversial hashtag or adopt a “stuck record” mentality over a topic.
Any time where a jurisdiction is in a state of turmoil is where the Web, especially the Social Web, can be a tool of information warfare. When you use it, you need to be on your guard about what you share or which posts you interact with.
Here, do research on hashtags that suddenly trend on a social-media platform and play on your emotions, and be especially careful of new or inauthentic accounts that push these hashtags.
Thanks to the COVID-19 coronavirus plague, we are making increased use of various videoconferencing platforms for our work, education, healthcare, religious and social reasons.
This has been facilitated through applications like Zoom, Skype, Microsoft Teams and HouseParty. It also includes “over-the-top” text-chat and Internet-telephony apps like Apple’s FaceTime, Facebook’s Messenger, WhatsApp and Viber being used for this kind of communication, thanks to them opening up or having established multi-party audio/video conferencing or “party-line” communications facilities.
Security issues have been raised by various experts in the field about these platforms, with some finding that certain platforms aren’t fit for purpose in today’s use cases thanks to gaping holes in their security and privacy setup. In some cases, the software hasn’t been maintained in a way that prevents security risks from arising.
As well, there have been some high-profile “Zoombombing” attacks on video conferences in recent times. This is where inappropriate, usually pornographic, images have been thrown up into these video conferences to embarrass the participants, with one of these occurring during a court hearing and another disrupting an Australian open forum about reenergising tourism.
This has led to the public data-protection and privacy authorities in Australia, Canada, Gibraltar, Hong Kong, Switzerland and the United Kingdom writing an open letter to Microsoft, Cisco, Zoom, HouseParty and Google addressing these issues. I also see this as relevant to any company that runs a text-based “chat” or similar service offering group-chat or party-line functionality, or that adapts its IP-based one-to-one audio/video telephony platform for multi-party calls.
Some of these issues are very similar to what has been raised over the last 10 years thanks to the increased use of online services and cloud computing in our daily lives. This includes data security in a highly mobile computing environment with a heterogeneous mix of computing devices and online services, along with the issue of data sovereignty in a globalised business world.
One of the key issues is data security. This is about having proper data-security safeguards in place such as end-to-end encryption for communications traffic; improved access control like strong passwords, two-factor authentication or modern device-based authentication approaches like device PINs and biometrics.
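The layered access control described above can be sketched in code. This is a minimal illustration, not any platform’s actual implementation: the password policy (12 characters, mixed character classes) and the two-factor rule are hypothetical examples of the kind of safeguards being discussed.

```python
import re

def password_strong(pw: str) -> bool:
    # Illustrative policy only: minimum length plus mixed character classes.
    return bool(len(pw) >= 12
                and re.search(r"[A-Z]", pw)
                and re.search(r"[a-z]", pw)
                and re.search(r"\d", pw))

def may_join(password_ok: bool, second_factor_ok: bool) -> bool:
    # Two-factor access control: a knowledge factor alone is not enough;
    # a second factor (device PIN, biometric or authenticator code) is
    # required before the user is admitted.
    return password_ok and second_factor_ok
```

The point of the sketch is that the second factor is mandatory rather than optional, which is what distinguishes genuine two-factor authentication from a password with an optional extra.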
There will also be the requirement to factor in handling of sensitive data like telehealth appointments between medical/allied-health specialists and their patients. Similarly data security in the context of videoconferencing will also encompass the management of a platform’s abilities to share files, Weblinks, secondary screens and other media beyond the video-audio feed.
As well, a “secure by design and default” approach should prohibit the sharing of resources, including screen views, unless the person managing the videoconference gives the go-ahead to the person offering the resource. If there is a resource-preview mechanism, the previews should only be available to the person in charge of the video conference.
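That host-gated sharing workflow can be modelled as a small state machine. The class and method names below are hypothetical, assumed for illustration; the behaviour they sketch is the secure-by-default rule described above: nothing is broadcast until the host approves, and previews reach the host alone.

```python
from dataclasses import dataclass

@dataclass
class ShareRequest:
    participant: str
    resource: str            # e.g. "screen", "file:slides.pdf"
    approved: bool = False

class Conference:
    """Minimal sketch of host-gated resource sharing (hypothetical model)."""

    def __init__(self, host: str):
        self.host = host
        self.pending: list[ShareRequest] = []

    def request_share(self, participant: str, resource: str) -> ShareRequest:
        # The request is queued; nothing is broadcast to participants yet.
        req = ShareRequest(participant, resource)
        self.pending.append(req)
        return req

    def preview(self, viewer: str, req: ShareRequest) -> bool:
        # Secure by default: only the host may see a preview of the resource.
        return viewer == self.host

    def approve(self, approver: str, req: ShareRequest) -> bool:
        # Only the host can allow the resource into the conference proper.
        if approver == self.host:
            req.approved = True
        return req.approved
```

The design choice worth noting is that approval is an explicit host action; a participant can never flip their own request to “approved”.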
Another key issue is user privacy, including business confidentiality. There will be a requirement for a videoconferencing platform to have “privacy by design and default”, similar to the core data-security operating principle of least privilege. This encompasses strong default access controls along with features like announcing new participants as they join a multi-party video conference; use of waiting rooms; muting the microphone and camera when you join a video conference, with you having to deliberately enable them to make your voice and video part of the conference; an option to blur out backgrounds or use substitute backgrounds; use of substitute still images like account avatars in lieu of a video feed when the camera is muted; and the like.
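The privacy-by-default behaviours listed above boil down to the initial state a participant lands in when they join. A small sketch, with hypothetical names, makes the idea concrete: microphone and camera start muted, the background starts blurred, newcomers are announced, and an avatar stands in for the video feed while the camera is off.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    # Privacy by default: every capability starts off or hidden,
    # and the user must deliberately enable it.
    mic_on: bool = False
    camera_on: bool = False
    background_blurred: bool = True

def join(conference_log: list, p: Participant) -> Participant:
    # Announce the newcomer so existing participants know who is present.
    conference_log.append(f"{p.name} has joined the conference")
    return p

def avatar_or_video(p: Participant) -> str:
    # Show a still account avatar rather than a live feed while muted.
    return "video-feed" if p.camera_on else "account-avatar"
```

The defaults in the dataclass are the whole point: a participant who does nothing shares nothing.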
There will also be a requirement to allow businesses to comply with user-privacy obligations, such as enabling them to seek users’ express consent before they participate. It also includes a requirement for the platform to minimise the capture of data to what is necessary to provide the service, such as limiting unnecessary synchronising of contact lists.
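Data minimisation plus express consent can be expressed as a simple filter over what a platform is allowed to collect. The field names below are illustrative assumptions, not any platform’s real schema: only the fields needed to run the call are kept by default, and optional items like a contact list pass through only with the user’s explicit opt-in.

```python
# Illustrative field sets, assumed for this sketch.
REQUIRED_FIELDS = {"display_name"}                    # enough to join a call
OPTIONAL_FIELDS = {"contact_list", "location", "calendar"}

def minimise(profile: dict, consented: set) -> dict:
    """Keep required fields plus only those optional fields the user
    has expressly consented to share; everything else is dropped."""
    allowed = REQUIRED_FIELDS | (OPTIONAL_FIELDS & consented)
    return {k: v for k, v in profile.items() if k in allowed}
```

With no consent given, the function reduces the profile to the bare minimum needed to provide the service, which is the behaviour the open letter asks for.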
Another issue is for the platforms to “know their audience”, or know what kind of users are using their platform, so they can properly provide these services in a privacy-focused way. This applies especially to use of the platform by children and vulnerable user groups, or where the platform is being used in a sensitive setting like education, health or religion.
As well, it encompasses situations where a videoconferencing platform is used, or has its data handled, within a jurisdiction that doesn’t respect fundamental human rights and civil liberties. This risk will only increase as countries succumb to populist rule and strongman politics and abandon the idea of these rights. In such cases, participants face increased exposure to the risks associated with these jurisdictions, especially if the conversation is about a controversial topic or activity or they belong to a people group targeted by the oppressive regime.
Another issue being raised is transparency and fairness. This is about what data is being collected by the platform, how it is being used, whom it is shared with, including the jurisdictions those parties are based in, and why it is being collected, no matter how trivial the data may seem. Transparency about data use within the platform also covers what happens whenever the platform evolves and the kind of impact any change would have.
The last point is to give each end-user effective control over their experience with the videoconferencing platform. Here, an organisation or user group may determine that a particular videoconferencing platform like Zoom or Skype is the order of the day for their needs. But users need to be able to know whether location data is being collected, whether the videoconference is tracking their engagement, or whether it is being recorded or transcribed.
I would add to this letter the issue of the platform’s user-friendliness, from provisioning new users through all stages of establishing and managing a videoconference. This is of particular concern where videoconference platforms are used by young children or older-generation people who have had limited exposure to newer technologies. It also includes efforts to make the platform accessible to users of all abilities.
This is relevant to the security and user privacy of a videoconferencing platform because it simplifies the ability of videoconference hosts and participants to maintain effective control of their experience. If a platform’s user interface is difficult to use safely, videoconference hosts and participants will end up opting for insecure setups, thus making themselves vulnerable.
For example, consistent and less-confusing function icons or colours would be required for the software’s controls; along with proper standardised “mapping” of controls on hardware devices to particular functions. Or there could be a user-interface option that always exposes the essential call-management controls at the bottom of the user’s screen during a videocall.
This issue came to my mind through regularly participating in a Skype videoconference session with my church’s Bible-study group. Most of the members of that group were of older generations and weren’t necessarily technology-literate. I have had to explain which icons to click or tap to enable the camera or microphone during the videoconference, and even started sessions earlier to “walk” participants through using Skype. Here, it would be about calling out buttons on the screen that have particular icons for particular functions, like enabling the camera or microphone or selecting the front or back camera on their device.
At least these public-service efforts have come about to highlight the persistent security and privacy problems associated with the increased use of videoconferencing software.