Songwhip link – availability on streaming or download-to-own services (7” single equivalent)
Beatport (a download-to-own music service pitched at DJs) – 7” single equivalent
There is also an album of the same name, recorded by the same artist, that features this song as an album-length track. It is available on LP record or CD: you can get it at Amazon, or your favourite record store may have a copy on hand or can order one for you if you want it playing on your turntable or CD player.
Over this past year, a South African song with Zulu lyrics became a musical symbol of hope through the COVID season.
This song, “Jerusalema”, was recorded in 2019 by Master KG, but when it appeared online in 2020 along with a set of associated dance moves, it became very popular. A series of dance challenges followed, where individuals or groups of people performed the dance associated with the song and uploaded videos of their performances.
All of this coincided with the period when the COVID-19 coronavirus was an unknown quantity and governments implemented measures to limit its spread. These measures took the form of border and travel restrictions, stay-at-home orders and lockdowns, and mask-wearing and social-distancing mandates, amongst other things.
At the same time, leaders like Donald Trump and Jair Bolsonaro were treating the pandemic with contempt and stirring up disdain for these necessary public-health restrictions and for the medical-research races for treatments and vaccines. Even the act of honouring one or more of these public-health measures became politically charged within the USA.
The mood was further aggravated by the death of George Floyd at the hands of American police officers, which brought on the Black Lives Matter protest movement. This movement also highlighted how divisive civil rights and the treatment of marginalised minorities had become within the USA, something made worse by Donald Trump’s behaviour during the protests.
The song, its music and the associated dances conveyed a comfortable “feel-good” vibe along with a thread uniting the various communities of people, whether “over-the-wire” using the Internet or face-to-face where the various restrictions allowed it. This helped boost public morale through this season. There was also a celebration of the survivors and of survival.
Let’s not forget that the Zulu-language lyrics and the associated melody conveyed a message of escapism from the continuing barrage of bad news we were facing, much as other catchy popular songs played during hard times can serve as a form of escapism.
As well, “Jerusalema” has sparked interest in Afrobeat and placed Africa on the popular-music map. This follows on from the way African-heritage diasporas have contributed to popular music over the past century and more, in the form of jazz, funk, soul, disco and similar styles, along with techniques like rapping and breakdancing.
YouTube and similar services are replete with videos of these dance challenges performed by various groups of people. Some European airlines have even had aircrew teams perform the dance and make a video in the airline’s name as a way of saying “we will be back in the air again”. The Irish Gardai national police and the Swiss federal police each had officer teams within their forces create similar videos as an effort to boost public morale within their nations.
There are also videos on YouTube that teach the dance routine associated with this song, which are worth referring to if you want to learn it.
“Jerusalema” and its associated dance routine will be seen in the same light as those songs which had or acquired their own dance routines that marked out particular years or eras. Think of songs like “YMCA” by the Village People; “Forever” by Chris Brown with its wedding-dance video; “Macarena” by Los Del Rio; or “Vogue” by Madonna.
But it will also continue to be seen as a song of hope for the COVID-19 season, just as Gloria Gaynor’s “I Will Survive” became a song of hope through the 1970s or John Lennon’s “Imagine” through the Vietnam War era.
Since the “dot-com” era of the late 1990s, there has been a near-mythical home appliance often cited by Internet visionaries: the Internet fridge or “smart fridge”, a regular household refrigerator equipped with Internet connectivity and a large built-in display.
It is expected to provide access to a wide range of online services like online shopping, online photo albums, email and messaging, and online music services. It is also expected to keep track of the food and drink that is held therein using a simple inventory-management program.
In the context of the smart home, the Internet fridge is expected to be a “dashboard” or “control surface” for lighting, heating and other equipment associated with the home. The vision for the smart home is often to have as many control surfaces around the home as possible to manage what happens therein, like setting HVAC operating temperatures or turning lighting on and off according to particular usage scenarios.
The Internet-fridge idea builds on the way the typical household refrigerator’s door ends up as the noticeboard for that household, thanks to its role as the main food-storage location for the people and pets therein. There is a thriving trade in “fridge magnets” that people use to decorate their fridge’s door, and some households have even put a radio or TV on top of the fridge that they can flick on for information or entertainment in the kitchen.
Who is making these appliances?
At the moment, Samsung and LG are making Internet fridges in production quantities for the market. These are typically positioned as American-style wide-format fridges with integrated ice makers. Samsung offers theirs in a few different compartment configurations, with the cheapest being a two-door fridge-freezer arrangement.
But most of the other white-goods manufacturers exhibit examples of these Internet fridges at trade fairs primarily as proof-of-concept or prototype designs. These are typically based on common fridge-freezer designs already on the market but are modified with Internet functionality.
But the Internet-fridge idea has not become popular with most people. Why is that so?
One issue is the computer hardware associated with the Internet-fridge concept. These setups typically have a computer separate from the microcontroller circuitry that keeps the appliance’s compartments at the appropriate temperature or manages ice-maker or chilled-water functionality. But this computer hardware is integrated in the appliance in a manner that makes it hard for users to upgrade as expectations move on.
This means that if this computer fails or gets to a point where it is “end-of-life”, the user loses the full functionality associated with the Internet fridge. The same thing can happen if, for example, the touchscreen that the user uses to interact with the Internet fridge’s online abilities fails to work.
It is underscored by the fact that a household refrigerator is in that class of appliance expected to serve a household for many years. As I have seen, many households will buy a new fridge only when an old one fails to operate properly or when they are building a new house and want to upgrade. Yet a lot of consumer IT equipment isn’t expected to provide that length of service, thanks to rapidly-advancing technology.
Another factor is the software and online services associated with the Internet fridge. Typically this is engineered by the appliance manufacturer to provide the “branded experience” that the manufacturer wants to convey to the consumer.
The questions associated with the software centre on the appliance manufacturer’s continual attention to software security and quality over the lifetime of the Internet fridge. This includes protecting end-users’ privacy as they use the appliance, along with allowing the appliance to do its job properly and in a food-safe manner.
I would also add to this the competitive-trade issues associated with online services. Here, appliance manufacturers could easily create exclusive agreements with various online-service providers and not allow competing service providers access to the Internet-fridge platform. It can extend to online-shopping platforms that tie in with the inventory-management software associated with the Internet fridge platform.
Such exclusive partnerships with online service providers or online-shopping platforms will make it difficult for customers to use their preferred online-service or online-shopping platform with an Internet fridge. In the case of online-shopping platforms, it will become difficult for smaller, specialist or independent food suppliers to participate in these platforms especially if the platform has “tied up” a significant customer base. That can be achieved with excessive fees and charges or onerous terms and conditions for the merchants.
Let’s not forget that the Internet fridge ended up, like the Aeron-style office chair, being seen as a status symbol associated with the dot-com bubble.
For that matter, householders are using alternative approaches to the same goal touted by the Internet-fridge suppliers. Here, they are using smart speakers like Amazon Echo or Google Home or, if they are after a display-driven solution, they will use a smart display like Amazon Echo Show or a Google-Assistant-based smart display. Let’s not forget that the iPad or Android-based tablets are offering the idea of a ubiquitous control / display surface for the smart home.
What can be done to legitimise the Internet fridge as far as consumers are concerned?
As for the hardware, I would recommend a long-tailed approach which is focused on modularity. Here, newer computer, connection or display modules can be installed in the same fridge by the user or a professional as part of an upgrade approach. It could allow the appliance manufacturer to offer a cheaper range of standard-height household fridges that can be converted to Internet fridges at a later time when the user purchases and installs an “Internet display kit” on their appliance.
Furthermore, if the hardware or connectivity is of a standard form, it could allow a third-party vendor to offer this functionality on a white-label basis to appliance manufacturers who don’t necessarily want to reinvent the wheel. It can also apply to those appliance manufacturers who offer products in a “white-label” form under a distributor’s or retailer’s brand.
One approach I would recommend for software is access to ubiquitous third-party software platforms with a lively developer ecosystem like Android. The platforms should have an app store that maintains software quality. This means that users can install the software associated with what they need for their Internet fridge.
The problem that manufacturers may see with this approach is providing a user interface for controlling how the fridge operates such as setting the fridge, freezer or other compartment temperatures. Here, this could be facilitated by an app that runs as part of the Internet fridge’s display ecosystem. It may also be preferred to provide basic and essential control for the Internet fridge’s refrigeration and allied functionality independent of the Internet display functionality and create a secure firewall between those functions to assure food safety and energy efficiency.
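One way to picture the “secure firewall” described above is a narrow, allow-listed command interface between the display/app layer and the essential refrigeration controller. This is a minimal sketch only: the class names, commands and temperature limits below are entirely hypothetical and don’t reflect any manufacturer’s design.

```python
class RefrigerationController:
    """Essential control logic that keeps working even if the display fails."""

    # Safe setpoint limits in degrees Celsius -- illustrative values only.
    FRIDGE_RANGE = (1.0, 5.0)
    FREEZER_RANGE = (-24.0, -16.0)

    def __init__(self):
        self.setpoints = {"fridge": 4.0, "freezer": -18.0}

    def set_temperature(self, compartment: str, celsius: float) -> bool:
        """Apply a setpoint only if it stays within the food-safe range."""
        limits = {"fridge": self.FRIDGE_RANGE, "freezer": self.FREEZER_RANGE}
        lo, hi = limits[compartment]
        if not lo <= celsius <= hi:
            return False  # refuse food-unsafe setpoints outright
        self.setpoints[compartment] = celsius
        return True


class DisplayGateway:
    """The only path from display apps to the controller: a fixed allow-list."""

    ALLOWED = {"get_setpoints", "set_temperature"}

    def __init__(self, controller: RefrigerationController):
        self._controller = controller

    def handle(self, command: str, **kwargs):
        """Reject anything outside the allow-list before it reaches the controller."""
        if command not in self.ALLOWED:
            raise PermissionError(f"command {command!r} not permitted")
        if command == "get_setpoints":
            return dict(self._controller.setpoints)
        return self._controller.set_temperature(**kwargs)
```

Because the controller enforces its own safe temperature limits and the gateway only passes a fixed set of commands, a misbehaving or compromised display app can’t push the compartments into food-unsafe territory.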
Using open-frame approaches for building Internet-display functionality in to fridges may help with reducing the cost of this kind of functionality in these products. It could also encourage ubiquity in a low-risk form as well as encouraging innovation in this product class.
You need access to the latest data representing your computer’s operating system, device drivers and allied software from its manufacturer as a recovery image, to simplify any repair / restore efforts or to get your “new toy” up and running as quickly and smoothly as possible.
Recent computers that run macOS or Windows now come with a partition on their hard disk or SSD holding a copy of the operating system and other software that came with the computer “out of the box”. Alternatively, you can download a recovery image for your computer from the manufacturer’s Website using a manufacturer-supplied app.
This is in lieu of the previous method of delivering an optical disc with the computer containing the operating system and other manufacturer-supplied software, now that newer computers aren’t equipped with optical drives.
Here, this recovery data comes into play if the operating system fails and you have to reinstate it from a known good copy. Examples include the computer being taken over by malware, needing to get it back to “ground zero” before relinquishing it, or the system disk (hard disk or SSD) failing so you have to put the operating system on a new system disk as part of reconstructing your computing environment.
But Microsoft, Apple and the manufacturers of your computer’s internal peripherals update their software regularly as part of their software quality control. There are also the feature updates that add functionality or implement newer device-class drivers as part of an operating system’s lifecycle.
What typically happens is this recovery image represents the software that came with your computer when it left the factory. It doesn’t include all the newer updates and revisions that took place. Here, if you have had to restore the operating system from that recovery image, you will then have to download the updates from your computer’s manufacturer, the operating system vendor or other software developers to have your computer up-to-date.
The firmware / BIOS updates may not matter here, because they are delivered as a “download-to-install” package. When these packages are run, they verify and shift the necessary firmware code to the BIOS / UEFI subsystem for the computer, or to the firmware subsystems for peripherals supported by the computer’s manufacturer, then commence the installation process.
Questions that can be raised include whether the factory-supplied data should be maintained as the definitive “reference data” for your system. Or whether the computer manufacturer is to provide a means to keep the software up-to-date with the latest versions for your computer.
This will be an issue with manufacturers who prefer to customise the software drivers that run hardware associated with their computer products while end-users prefer to run the latest software drivers offered by the hardware’s manufacturer. This is typically due to the hardware manufacturer’s code being updated more frequently and is of concern with display chipsets like Intel’s integrated-graphics chipsets.
Similarly there is the issue that people are likely to change the software edition that comes with their computer like upgrading to a “Pro” edition of the Windows operating system when the computer came with the Home edition.
An approach that a manufacturer can take over a computer system’s lifetime is to revise the definitive “reference data” set for that system. This could be undertaken when the operating system undergoes a major revision like a feature update. This can be about taking stock of the device drivers and updating them to newer stable code as part of offering the latest “reference data” set.
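As a rough illustration of that “reference data” revision, here is a sketch that folds newer stable component versions into a factory-supplied set while never downgrading anything. The component names and version numbers are made up for the example; a real manufacturer’s process would work against its own driver catalogue.

```python
def revise_reference_set(factory: dict, updates: dict) -> dict:
    """Return a new component->version map, preferring the newer version.

    `factory` is the version set captured when the model first shipped;
    `updates` holds the latest stable versions vetted by the manufacturer.
    Versions are assumed to be dotted-numeric strings (e.g. "27.20.100").
    """
    def as_tuple(version: str):
        # Compare versions numerically rather than as strings,
        # so "10.2" correctly ranks above "9.9".
        return tuple(int(part) for part in version.split("."))

    revised = dict(factory)
    for component, version in updates.items():
        if component not in revised or as_tuple(version) > as_tuple(revised[component]):
            revised[component] = version
    return revised
```

For instance, `revise_reference_set({"graphics-driver": "26.20.100"}, {"graphics-driver": "27.20.100"})` would carry the newer graphics driver into the revised reference set, while an older update entry would leave the factory version untouched.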
That means a user performing an operating-system recovery doesn’t need to hunt for and download updates as part of the process if they want the computer running the latest code.
This kind of approach can also come into its own during the time the computer system is on the market. During subsequent years, newer computer units would receive the latest software updates before they leave the factory, so the computer’s end-user or corporate IT department doesn’t have to download the latest versions of the operating system, device drivers and other software as part of commissioning a new computer system.
The idea of computer manufacturers keeping their products’ software-recovery data current will benefit all of us whether we are buying that new computer and want to get that “new toy” running or need to reinstate the operating software in our computers due to hardware or software failures.
Smartphones are facilitating our listening to podcasts
As we listen to more spoken-word audio content in the form of podcasts and the like, we may want to see this kind of audio content easily delineated in a logical manner. For that matter, such content is being listened to as we drive or walk thanks to the existence of car and personal audio equipment including, nowadays, the “do-it-all” smartphones being connected to headphones or car stereos.
This may be to return to the start of a segment if we were interrupted so we really know where we are contextually. Or it could be to go to a particular “article” in a magazine-style podcast if we are after just that article.
Prior attempts to delineate spoken-word content
In-band cue marking on cassette
Some people who distributed cassette-based magazine-style audio content, typically to vision-impaired people, used mixed-in audio marking recorded at high speed to allow a user to find articles on a tape.
This worked with tape players equipped with cue and review functionality, something that was inconsistently available. Such functionality, typically activated by holding down the fast-forward or rewind buttons while the tape player was in play mode, allowed the tape to be run forward or backward at high speed while you heard what was recorded as a high-pitched warbling tone.
With this indexing approach, you would hear a reference tone that delineated the start of the segment in either direction. But if you used the “cue” button to seek through the tape, you would also hear an intelligible phrase that identified the segment so you knew where you were.
Here, this function was dependent on whether the tape player had cue and review operation and required the user to hold down the fast-wind buttons for it to be effective. This ruled out use within car-audio setups that required the use of locking fast-wind controls for safe operation.
Index Marking on CDs
The original CD Audio standard had inherent support for index marking that was subordinate to the track markers typically used to delineate the different songs or pieces. This was to delineate segments within a track such as variations within a classical piece.
Most 1980s-era CD players of the type that connected to your hi-fi system supported this functionality, more so the premium-level models, and how they treated the function varied markedly. The most basic implementation was to show the index number on the display after the track number. CD players with eight-digit displays showed the index number as a smaller-sized number after the track number, while those with a four- or six-digit display had you press the display button to show the track number and index number.
Better implementations had the ability to step between the index marks with this capability typically represented by an extra pair of buttons on the player’s control surface labelled “INDEX”. Some more sophisticated CD players even had direct access to particular index numbers within a track or could allow you to program an index number within a track as part of a user-programmed playlist.
As well, some CDs made use of this feature, usually classical-music discs featuring long instrumental works that are best directly referenced at significant points. Support for index marking died out by the 1990s, with its last use being to mark the proper start of a song. That was considered important for live recordings or concept albums where a song or instrumental piece segues into another, so that repeat, random (shuffle) or programmed-play modes bring a song or piece in at its proper start.
There was an interest in spoken-word material on CD through the late 1990s with the increase in the number of car CD players installed in cars. This was typically in the form of popular audiobooks or foreign-language courseware and car trips were considered a favourite location for listening to such content. But these spoken-word CDs were limited to using tracks to delineate chapters in a book or lessons within a foreign-language course.
But CD-R, with its ability to support on-site short-run replication of limited-appeal content, opened the door for content like religious sermons or talks to appear on the CD format. This technology effectively “missed the boat” when it came to index marking, as most CD-burning software didn’t allow you to place index marks within a track.
The podcast revolution
File-based digital audio and the Internet opened the door to regularly-delivered spoken-word audio content in the form of podcasts. A podcast is effectively a radio show delivered as an audio file available to download, with RSS Webfeeds allowing listeners to follow it for newer episodes.
Here, podcast-management or media-management software automatically downloads or enqueues podcast episodes for subsequent listening, marking what is listened to as “listened”. Some NAS-based DLNA servers can be set up to follow podcasts and download them to the NAS hard disk as new content, creating a UPnP-AV/DLNA content tree out of these podcasts available to any DLNA-compliant media playback device.
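To illustrate what that following mechanism involves, here is a minimal Python sketch that pulls episode titles and audio-file URLs out of a podcast’s RSS Webfeed using only the standard library. The feed content and URLs are invented for the example; real podcast software adds download queuing and listened-state tracking on top of this.

```python
# A podcast feed is ordinary RSS: each episode is an <item> whose audio
# file is referenced by an <enclosure> element.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Podcast</title>
    <item>
      <title>Episode 2</title>
      <enclosure url="https://example.com/ep2.mp3" type="audio/mpeg" length="123456"/>
    </item>
    <item>
      <title>Episode 1</title>
      <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg" length="123000"/>
    </item>
  </channel>
</rss>"""

def list_episodes(feed_xml: str):
    """Return (title, audio URL) pairs for each episode in the feed."""
    root = ET.fromstring(feed_xml)
    episodes = []
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        enclosure = item.find("enclosure")
        if enclosure is not None:
            episodes.append((title, enclosure.get("url")))
    return episodes
```

A podcast manager would fetch the feed over HTTP on a schedule, compare the result of `list_episodes()` against what it has already downloaded, and enqueue anything new.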
The podcast has gained a strong appeal with small-time content creators who want to create what is effectively their own radio shows without being encumbered by the rules and regulations of broadcasting or having to see radio stations as content gatekeepers.
The podcast has also appealed to radio stations in two different ways. Firstly, it has allowed the station’s talent to have their spoken-word content they broadcast previously available for listeners to hear again at a later time.
It also meant that the station’s talent could create supplementary audio content that isn’t normally broadcast but available for their audience, thus pushing their brand and that of the station further. This includes the creation of frequently-published short-form “snack-sized” content that may allow for listening during short journeys for example.
Secondly, a talk-based radio station could approach a podcaster and offer to syndicate their podcast. That is, to pay for the right to broadcast the podcast on their station into the station’s market. This would appeal to radio stations needing programming to fill schedule gaps like the overnight “graveyard shift”, weekends or summer holidays when their regular talent base isn’t available. It can also be used as a way to put a rising podcast star “on the map” before considering whether to have them behind the station’s microphone.
Why chapter marking within podcasts?
A lot of podcast authors run their shows in a magazine form, perhaps with multiple articles or segments within the same podcast. As well, whoever gives a talk or sermon typically breaks it down into points so the audience knows where they are. But the idea of delineating such segments within an audio file hasn’t been properly worked out.
This can benefit listeners who are after a particular segment especially within a magazine-style podcast. Or a listener could head back to the start of a logical point in the podcast when they resume listening so they effectively know where they are at contextually.
This can also appeal to ad-supported podcast directories like Spotify who use radio-style audio advertising and want to insert ads between articles or sections of a podcast. The same applies to radio stations who wish to syndicate podcasts. Here they would need to pause podcasts to insert local time and station-identity calls and, in some cases, local advertising spots or news bulletins.
Is this feasible?
The ID3v2 standard, which carries metadata for most audio file formats including MP3, AAC and FLAC, supports chapter marking within the audio file. It is based around a file-level “table of contents” which defines each audio chapter and can even carry textual and graphical descriptions for each chapter.
There is also support for hierarchical tables of contents, like a list of “points” within each content segment as well as an overall list of content segments. Each “table of contents” has a flag bit indicating whether the chapters therein are to be played in order or can be played individually. That could be used by an ad-supported podcast directory or broadcast playout program to decide whether to insert local advertising between entries.
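As a rough illustration, a player could model these chapter and table-of-contents structures along the following lines. The class and field names are illustrative only and don’t reflect the actual ID3v2 CHAP / CTOC frame layout:

```python
# Sketch of an ID3v2-style chapter model: chapters carry start/end
# times, and a table of contents groups them with an "ordered" flag
# mirroring the bit described above.
from dataclasses import dataclass, field

@dataclass
class Chapter:
    element_id: str
    title: str
    start_ms: int   # chapter start, milliseconds into the file
    end_ms: int     # chapter end, milliseconds into the file

@dataclass
class TableOfContents:
    element_id: str
    ordered: bool   # True: entries play in sequence; False: stand-alone entries
    entries: list = field(default_factory=list)  # Chapter objects

    def chapter_at(self, position_ms: int):
        """Which chapter is playing at a given position, if any?"""
        for chapter in self.entries:
            if chapter.start_ms <= position_ms < chapter.end_ms:
                return chapter
        return None
```

With this model, a player resuming playback at 75 seconds into a file whose second chapter starts at the 60-second mark can tell the listener they are in that second chapter. A real implementation would read and write the actual CHAP and CTOC frames with a tagging library rather than hand-rolled classes.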
What is holding it back?
The main problem with utilising the chapter markers supported within ID3v2 is the lack of proper software support at both the authoring and playback ends of the equation.
Authoring software available to the average podcaster provides inconsistent and non-intuitive support for placing chapter markers within a podcast. This opens up room for errors when authoring that podcast and enabling chapter marking therein.
As well, very few podcast-manager and media-player programs recognise these chapter markers and provide the necessary navigation functionality. This could be offered at least by having chapter locations visible as tick marks on the seek-bar in the software’s user interface, and perhaps by letting you hold down the cue and review buttons to seek to the previous or next chapter.
Better user interfaces could list out chapters within a podcast so users can know “what they are up to” while listening or to be able to head to the segment that matters in that magazine-style podcast.
Similarly, the podcast scene needs to know the benefits of chapter-marking a podcast. In an elementary form, marking out a TED Talk, church sermon or similar speech at each key point can be beneficial. For example, a listener could simply recap a point they missed due to being distracted thus getting more value out of that talk. If the podcast has a “magazine” approach with multiple segments, the listener may choose to head to a particular segment that interests them.
The use of chapter marking within podcasts and other spoken-word audio content could make this kind of content easier to deal with for most listeners. Here, it is about searching for a particular segment within the podcast, or heading back to the start of a significant point therein if you were interrupted, so you can hear that point in context.
In some countries like the UK, Australia and Germany, regional broadband infrastructure providers set up shop to provide next-generation broadband to a particular geographic area within a country.
This is used to bring next-generation broadband technology like fibre-to-the-premises to homes and businesses within that geographic area. But let me remind you that fibre-to-the-premises isn’t the only medium they use — some of them use fixed wireless or a fibre-copper setup like HFC cable-modem technology or fibre + Ethernet-cable technology. But they aren’t using the established telephone network at all thus they stay independent of the incumbent infrastructure provider and, in some areas like rural areas, that provider’s decrepit “good enough to talk, not good enough for data” telephone wiring.
In the UK especially, most of these operators will target a particular kind of population centre like a rural village cluster (Gigaclear, B4RN, etc), a large town or suburb (Zzoom), city centres (Cityfibre, Hyperoptic, etc) or even just greenfield developments. Some operators set themselves up in multiple population centres in order to get them wired up for the newer technology but all of the operators will work on covering the whole of that population centre, including its outskirts.
This infrastructure may be laid ahead of the incumbent traditional telco or infrastructure operator like Openreach, NBN or Deutsche Telekom or it may be set up to provide a better Internet service than what is being offered by the incumbent operator. But it is established and maintained independently of the incumbent operator.
Internet service offerings
Typically the independent regional broadband infrastructure providers run a retail Internet-service component available to households and small businesses in that area and using that infrastructure. The packages are often pitched to offer more value for money than what is typically offered in that area thanks to the infrastructure that the provider controls.
But some nations place a competitive-market requirement on these operators to offer wholesale Internet service to competing retail ISPs, with this requirement coming into force when they reach significant market penetration. That is usually assessed by the number of actual subscribers connected to the provider’s Internet service or the number of premises passed by the operator’s street-level infrastructure. In addition, some independent regional infrastructure providers offer wholesale service earlier as a way to draw in more money to increase their footprint.
This kind of wholesale internet service tends to be facilitated by special wholesale Internet-service markets that these operators are part of. Initially this will attract boutique home and small-business Internet providers who focus on particular customer niches. But some larger Internet providers may prefer to take an infrastructure-agnostic approach, offering mainstream retail Internet service across multiple regional service providers.
Support by local and regional government
Local and regional governments are more likely to provide material and other support to these regional next-generation infrastructure operators. This is to raise their municipality’s or region’s profile as an up-to-date community to live or do business within. It is also part of the “bottom-up” approach that these operators take in putting themselves on the map.
In a lot of cases, the regional next-generation infrastructure providers respond to tenders put forward by local and regional governments. This is either to provide network and Internet service for the government’s needs or to “wire up” the government’s area of jurisdiction, or a part thereof, for next-generation broadband.
There will have to be legislative enablers put forward by national and regional governments to permit the creation and operation of regional next-generation broadband network infrastructure. This could include the creation and management of wholesale-broadband markets to permit retail-Internet competition.
There is also the need to determine how much protection a small regional infrastructure operator needs against the incumbent or other infrastructure operators building over their infrastructure with like offerings. This may be about assuring the small operator sufficient market penetration in their area before others come along and compete, along with providing an incentive to expand in to newer areas.
It will also include issues like land use and urban planning along with creation and maintenance of rights-of-way through private, regulated or otherwise encumbered land for such use including competitors’ access to these rights-of-way.
That also extends to access to physical infrastructure like pits, pipes and poles by multiple broadband service providers, especially where an incumbent operator has control over that infrastructure. It can also extend to use of conduits or dark fibre installed along rail or similar infrastructure expressly for the purpose of creating data-communications paths.
That issue can also extend to how multiple-premises buildings and developments like shopping centres, apartment blocks and the like are “wired up” for this infrastructure. Here, it can be about allowing or guaranteeing right of access to these developments by competing service providers and how in-building infrastructure is provided and managed.
The need for independent regional next-generation broadband infrastructure
But if an Internet-service market is operating in a healthy manner and offering value-for-money Internet service, as is the case with New Zealand, there may not be a perceived need to allow competing regional next-generation infrastructure to exist.
Such infrastructure can be used to accelerate the provision of broadband within rural areas, provide different services like simultaneous-bandwidth broadband service for residential users or increase the value for money when it comes to Internet service. Here, the existence of this independent infrastructure with retail Internet services offered through it can also be a way to keep the incumbent service operator in check.
The idea of a Zoom or similar platform user joining the same videoconference from multiple devices could be considered in some cases
Increasingly, when we use a videoconferencing platform, we install the client software associated with it on all the computing devices we own. Then we log in to our account associated with that platform so we can join videoconferences from whichever device we have on hand and suits our needs.
But most of these platforms allow a user to use only one device at a time to participate in the same videoconference. Zoom extends on this by allowing concurrent use of devices of different types (smartphone, mobile-platform tablet or regular computer) by the same user account in the same conference.
But why support the concurrent use of multiple devices?
There are some use cases where multiple devices used concurrently may come in handy.
Increased user mobility
especially with tablet computers and 2-in-1s located elsewhere
One of these is to assure a high level of mobility while participating in a videoconference. This may be about moving between a smartphone that is in your hand and a tablet or laptop that is at a particular location like your office.
It can also be about joining the same videoconference from other devices that are bound to the same account. This could be about avoiding multiple people crowding around one computing device to participate in a videoconference from their location, which can lead to user discomfort or too many people appearing in one small screen in a “tile-up” view of a multiparty videoconference. Or it can be about some people participating in a videoconference from an appropriate room like a lounge area or den.
like in a kitchen with this Lenovo Yoga Tab Android tablet
Similarly, one or more users at the same location may want to simply participate in the videoconference in a passive way but not be in the presence of others who are actively participating in the same videoconference. This may simply be to monitor the call as it takes place without the others knowing. Or it could be to engage in another activity like preparing food in the kitchen while following the videocall.
As far as devices go, there may be the desire to use a combination of devices that have particular attributes to get the most out of the videocall. For example, it could be about spreading a large videoconference across multiple screens such as having a concurrent “tile-up” view, active speaker and supporting media across three screens.
Or a smartphone could be used for audio-only participation so you can have the comfort of a handheld device while you see the participants and are seen by them on a tablet or regular computer. As well, some users may operate two regular computers like a desktop or large laptop computer along with a secondary laptop or 2-in-1 computer.
Support for other device types by videoconferencing platforms
.. or a smart display like this Google-powered Lenovo smart display
Another key trend is for videoconferencing platforms to support devices that aren’t running desktop-platform or mobile-platform operating systems.
This is exemplified by Zoom providing support for popular smart-display platforms like Amazon Echo Show or Google Smart Display. It is although some of the voice-assistant platforms that offer smart displays do support videocall functionality on platforms owned by the voice-assistant platform's developer or one or more other companies they are partnering with.
Or Google providing streaming-vision support for a Google Meet videoconference to a large-screen TV via Chromecast. It is something that could reinvigorate videoconferencing on smart-TV / set-top box platforms, something I advocate so that many people, like a whole family or household, can participate in a videoconference from one end. This is once factors like accessory Webcams, 10-foot "lean-back" user interfaces and the like are worked out.
It can also extend to the idea of voice-assistant platforms extending this to co-opting a smart speaker and a device equipped with a screen and camera to facilitate a videoconference. This could be either with you hearing the videoconference via the smart speaker or the display device’s audio subsystem.
What can be done to make this secure for small accounts?
There can be security and privacy issues with this kind of setup, with people away from the premises but operating the same account being able to join a videoconference uninvited. Similarly, a lot of videoconferencing platforms that offer a service especially to consumers may prefer to offer this feature as part of their paid "business-class" service packages.
One way to make this kind of participation secure for a small account would be to use logical-network verification. This would make sure that all devices are behind the same logical network (subnet) where there is a want for multiple devices from the same account to participate in the same videoconference. It may not work well with devices having their own modem, such as smartphones, tablets or laptops directly connected to mobile broadband, or people plugging USB mobile-broadband modems in to their computers. Similarly, it may not work with public-access or guest-access networks that are properly configured to prevent devices from discovering each other on the same network.
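The logical-network check described above can be sketched very simply. The following is a minimal, hypothetical illustration (function and parameter names are my own, and it assumes a typical /24 home subnet; a real platform would work from addresses reported through its own signalling):

```python
import ipaddress

def same_logical_network(device_ips, prefix_len=24):
    """Return True only if every reported device address falls
    within the same subnet (assumed here to be a /24)."""
    networks = {
        ipaddress.ip_interface(f"{ip}/{prefix_len}").network
        for ip in device_ips
    }
    return len(networks) == 1

# Two devices on a typical home network pass the check...
print(same_logical_network(["192.168.1.10", "192.168.1.22"]))  # True
# ...while a smartphone on mobile broadband would fail it.
print(same_logical_network(["192.168.1.10", "10.54.3.7"]))     # False
```

This also shows why the approach breaks down for the mobile-broadband and guest-network cases mentioned above: those devices simply never share the home subnet.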
Similarly, device-level authentication, which could facilitate password-free login, can also be used to authenticate the actual devices operated by an account. A business rule could exist to place a limit on the number of devices of any class operated by the same consumer account that are able to concurrently join a videoconference at any one time. This could realistically be taken to five devices, allowing for the fact that a couple or family may prefer to operate the same account across all the devices owned by the members of that group, rather than have members maintain individual accounts bound to their own devices.
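Such a business rule amounts to a simple cap on concurrent devices per account. A hypothetical sketch (the five-device cap and all names are assumptions, not any platform's actual policy):

```python
MAX_CONCURRENT_DEVICES = 5  # assumed cap for a shared consumer account

def can_join(devices_in_call, joining_device_id):
    """Decide whether an authenticated device may join a conference.

    devices_in_call -- set of device IDs from this account already
    in the conference. A device that is already present is always
    allowed (e.g. a reconnect); otherwise the cap applies.
    """
    if joining_device_id in devices_in_call:
        return True
    return len(devices_in_call) < MAX_CONCURRENT_DEVICES
```

For example, a sixth household device would be refused until one of the five active devices leaves the call, while a dropped device reconnecting is always let back in.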
The idea of a videoconferencing platform allowing concurrent multiple-device participation for a single account is worth considering. This can be about increased mobility or user comfort, or to cater towards the use of newer device types in the context of videoconferencing.
Regularly I keep an eye out for information regarding efforts within Europe to increase their prowess when it comes to business and personal IT services. This is more so as Europe is having to face competition from the USA’s Silicon Valley and from China in these fields.
But what do Europeans stand for?
Airbus have proven that they are a valid European competitor to Boeing in the aerospace field
What Europeans hold dear to their heart when it comes to personal, business and public life are their values. These core values encompass freedom, privacy and diversity and have been built upon their experience of history, especially since the Great Depression.
They had to deal with the Hitler, Mussolini and Stalin dictatorships, especially with Hitler's Nazis taking over European nations like France and Austria; along with the Cold War era with Eastern Europe under communist dictatorships loyal to the Soviet Union. All these affected countries were run as police states with national security forces conducting mass surveillance of the populace at the behest of the dictators.
I also see this in the context of business through a desire to have access to a properly-functioning competitive market driven by publicly-available standards and specifications. It includes a strong deprecation of bribery, corruption and fraud within European business culture, whether this involves the public sector or not. This is compared to an “at-any-cost” approach valued by the USA and China when it comes to doing business.
As well, the European definition of a competitive market is the availability of goods or services for best value for money. This includes people who are on a very limited budget gaining access to these services in a useable manner that underscores the pluralistic European attitude.
How is this relevant to business and consumer IT?
Nowadays, business and consumer IT is more “service-focused” through the use of online services whether totally free, complementary with the purchase of a device, paid for through advertising or paid for through regular subscription payments. Increasingly these services are being driven by the mass collection of data about the service’s customers or end-users with people describing the data as being the “new oil”.
Examples of this include Web search engines, content hosting providers like YouTube or SoundCloud, subscription content providers, online and mobile gaming services, and voice-driven assistants. It also includes business IT services like cloud-computing services and general hosting providers that facilitate these services.
Europeans see this very differently due to their heritage. Here, they want control over their data along with the ability to participate in a competitive market that works to proper social expectations. This is compared to business models operated by the USA and China that disrespect the “Old World’s” approach to personal and business values.
The European Union have defended these goals but primarily with the “stick” approach. It is typically through passing regulations like the GDPR data-protection regulations or taking legal action against US-based dominant players within this space.
But what needs to happen and what is happening?
What I often want to see happen is European companies build up credible alternatives to what businesses in China and the USA are offering. Here, the various hardware, software and services that Europe has to offer respects the European personal and business culture and values. They also need to offer this same technology to individuals, organisations and jurisdictions who believe in the European values of stable government that respects human rights including citizen privacy and the rule of law.
What is being done within Europe?
Spotify – one of Europe’s success stories
There are some European success stories like Spotify, the "go-to" online music-subscription service based in Sweden, as well as a viable French competitor in the form of Deezer, along with SoundCloud, an audio-streaming service based in Germany.
Candy Crush Saga – a European example of what can be done in the mobile game space
A few of the popular mobile "guilty-pleasure" games like Candy Crush Saga and Angry Birds were developed in Europe. Let's not forget Ubisoft, a significant French video-games publisher that has set up studios around the world and is one of the most significant household names in video games. Think of game franchises like Assassin's Creed or Far Cry, which are some of the big-time games that this developer has put out.
Then Qwant appeared as a European-based search engine that creates its own index and stores it within Europe. This is compared to some other European-based search engines which are really “metasearch engines” that concatenate data from multiple search engines including Google and Bing.
There have been a few Web-based email platforms like ProtonMail surfacing out of Switzerland that focus on security and privacy for the end-user. This is thanks to Switzerland’s strong respect for business and citizen privacy especially in the financial world.
The Freebox Delta is an example of a European product running a European voice assistant
There are some European voice assistants surfacing, with BMW developing the Intelligent Personal Assistant for in-vehicle use while the highly-competitive telecommunications market in France yielded some voice assistants of French origin thanks to Orange and Free. Spain came in on the act with Movistar offering their own voice assistant. I see growth in this aspect of European IT thanks to the Amazon Voice Interoperability Initiative, which allows a single hardware device like a smart speaker to provide access to multiple voice-assistant platforms.
The AVM FRITZ!Box 7530 is a German example of home network hardware with European heritage
Technicolor, AVM and a few other European companies are creating home network hardware, typically in the form of carrier-supplied home-network routers. It is although AVM are offering their Fritz lineup of home-network hardware through the retail channel, with one of these devices being the first home-network router to automatically update itself with the latest patches. In the case of Free.fr, their Freebox products are even heading towards the same kind of user interface expected out of a recent Synology or QNAP NAS, thanks to the continual effort to add more capabilities to these devices.
But Europe are putting the pedal to the metal when it comes to cloud computing, especially with the goal to assure European sovereignty over data handled this way. Qarnot, a French company, have engaged in the idea of computers that are part of a distributed-computing setup yielding their waste heat from data processing for keeping you warm or allowing you to have a warm shower at home. Now Germany is heading down the direction of a European-based public cloud for European data sovereignty.
There has been significant research conducted by various European institutions that has impacted our online lives. One example is the Fraunhofer Institute in Germany, which contributed to the development of file-based digital audio in both the MP3 and AAC formats. Another group of examples represents efforts by various European public-service broadcasters to effectively bring about "smart radio" with "flagging" of traffic announcements, smart automatic station following, selection of broadcasters by genre or area, and display of broadcast-content metadata through the ARI and RDS standards for FM radio and the evolution of DAB+ digital radio.
But what needs to happen, and may well be happening, is to establish and maintain Europe as a significantly-strong third force for consumer and business IT. As well, Europe needs to expose its technology and services to people and organisations in other countries rather than focusing them towards the European, Middle Eastern and Northern African territories.
European technology companies would need to offer the potential worldwide customer base something that differentiates themselves from what American and Chinese vendors are offering. Here, they need to focus their products and services towards those customers who place importance on what European personal and business values are about.
What needs to be done at the national and EU level
Some countries like France and Germany implement campaigns that underscore products that are made within these countries. Here, they could take these “made in” campaigns further by promoting services that are built up in those countries and have most of their customers’ data within those countries. Similarly the European Union’s organs of power in Brussels could then create logos for use by IT hardware and software companies that are chartered in Europe and uphold European values.
Switzerland has taken a proactive step towards cultivating local software-development talent by running a "Best of Swiss Apps" contest. Here, it recognises Swiss app developers who have turned out excellent software for regular or mobile computing platforms. At the moment, this seems to focus on apps which primarily have Switzerland-specific appeal, typically front-ends to services offered by the Swiss public service or companies serving Swiss users.
One goal for Europe to achieve is a particular hardware, software or IT-services platform that can do what Airbus and Arianespace have done with aerospace. This is to raise some extraordinary products that place themselves on the world stage as a viable alternative to what the USA and China offer. As well, it puts the establishment on notice that they have to raise the bar for their products and services.
The Dell XPS 13 series of ultraportable computers uses a combination of Intel integrated graphics and Thunderbolt 3 USB-C ports
Increasingly, laptop users want to make sure their computers earn their keep for computing activities that are performed away from their home or office. But they also want the ability to do some computer activities that demand more from these machines like playing advanced games or editing photos and videos.
What is this about?
Integrated graphics infrastructure like the Intel UHD and Iris Plus GPUs allows your laptop computer to run for a long time on its own battery. This is thanks to the infrastructure using the system RAM to "paint" the images you see on the screen, along with being optimised for low-power mobile use. This is more so if the computer is equipped with a screen resolution of not more than the equivalent of Full HD (1080p), which also doesn't put much strain on the computer's battery capacity.
They may be seen as being suitable for day-to-day computing tasks like Web browsing, email or word-processing, or lightweight multimedia and gaming activities while on the road. Even some games developers are working on capable video games that are optimised to run on integrated graphics infrastructure so you can play them on modest computer equipment or to while away a long journey.
There are some “everyday-use” laptop computers that are equipped with a discrete graphics processor along with the integrated graphics, with the host computer implementing automatic GPU-switching for energy efficiency. Typically the graphics processor doesn’t really offer much for performance-grade computing because it is a modest mobile-grade unit but may provide some “pep” for some games and multimedia tasks.
Thunderbolt 3 connection on a Dell XPS 13 2-in-1
But if your laptop has at least one Thunderbolt 3 USB-C port along with the integrated graphics infrastructure, it will open up another option. Here, you could use an external graphics module, also known as an eGPU unit, to add high-performance dedicated graphics to your computer while you are at home or the office. As well, these devices provide charging power for your laptop which, in most cases, would relegate the laptop's supplied AC adaptor to being an "on-the-road" or secondary charging option.
A use case often cited for this kind of setup is a university student who is studying on campus and wants to use the laptop in the library to do their studies or take notes during classes. They then want to head home, whether it is at student accommodation like a dorm / residence hall on the campus, an apartment or house that is shared by a group of students, or their parents’ home where it is within a short affordable commute from the campus. The use case typifies the idea of the computer being able to support gaming as a rest-and-recreation activity at home after all of what they need to do is done.
Razer Core external graphics module with Razer Blade gaming laptop
Here, the idea is to use the external graphics module with the computer and a large-screen monitor, having the graphics power come in to play during a video game. As well, if the external graphics module is portable enough, it may be about connecting the laptop to a large-screen TV installed in a common lounge area at their accommodation on an ad-hoc basis so they benefit from that large screen when playing a game or watching multimedia content.
The advantage in this use case would be to have the computer affordable enough for a student at their current point in life, thanks to it not being kitted out with a dedicated graphics processor that may be seen as underwhelming anyway. But the student can save towards an external graphics module of their choice and get that at a later time when they see fit. In some cases, it may be about using a "fit-for-purpose" graphics card like an NVIDIA Quadro with the eGPU if they maintain interest in that architecture or multimedia course.
It also extends to business users and multimedia producers who prefer to use a highly-portable laptop “on the road” but use an external graphics module “at base” for those activities that need extra graphics power. Examples of these include to render video projects or to play a more-demanding game as part of rest and relaxation.
Sonnet eGFX Breakaway Puck integrated-chipset external graphics module – the way to go for ultraportables
There are a few small external graphics modules that are provided with a soldered-in graphics processor chip. These units, like the Sonnet Breakaway Puck, are small enough to pack in your laptop bag, briefcase or backpack and can be seen as an opportunity to provide “improved graphics performance” when near AC power. There will be some limitations with these devices like a graphics processor that is modest by “desktop gaming rig” or “certified workstation” standards; or having reduced connectivity for extra peripherals. But they will put a bit of “pep” in to your laptop’s graphics performance at least.
Some of these small external graphics modules would have come about as a way to dodge the “crypto gold rush” where traditional desktop-grade graphics cards were very scarce and expensive. This was due to them being used as part of cryptocurrency mining rigs to facilitate the “mining” of Bitcoin or Ethereum during that “gold rush”. The idea behind these external graphics modules was to offer enhanced graphics performance for those of us who wanted to play games or engage in multimedia editing rather than mine Bitcoin.
Who is heading down this path?
At the moment, most computer manufacturers are configuring a significant number of Intel-powered ultraportable computers along these lines, i.e. with Intel integrated graphics and at least one Thunderbolt 3 port. Good examples of this are the recent iterations of the Dell XPS 13 and some of the Lenovo ThinkPad X1 family like the ThinkPad X1 Carbon.
Of course some of the computer manufacturers are also offering laptop configurations with modest-spec discrete graphics silicon along with the integrated-graphics silicon and a Thunderbolt 3 port. This is typically pitched towards premium 15” computers including some slimline systems but these graphics processors may not put up much when it comes to graphics performance. In this case, they are most likely to be equivalent in performance to a current-spec baseline desktop graphics card.
The Thunderbolt 3 port on these systems would be about using something like a “card-cage” external graphics module with a high-performance desktop-grade graphics card to get more out of your games or advanced applications.
Trends affecting this configuration
The upcoming USB4 specification is meant to be able to bring Thunderbolt 3 capability to non-Intel silicon thanks to Intel assigning the intellectual property associated with Thunderbolt 3 to the USB Implementers Forum.
As well, Intel has put forward the next iteration of the Thunderbolt specification in the form of Thunderbolt 4. It is more of an evolutionary revision in relationship to USB4 and Thunderbolt 3 and will be part of their next iteration of their Core silicon. But it is also intended to be backwards compatible with these prior standards and uses the USB-C connector.
What can be done to further legitimise Thunderbolt 3 / USB4 and integrated graphics as a valid laptop configuration?
What needs to happen is that the use case for external graphics modules needs to be demonstrated with USB4 and subsequent technology. As well, this kind of setup needs to appear on AMD-equipped computers and on devices that use silicon based on ARM microarchitecture, along with Intel-based devices.
Personally, I would like to see the Thunderbolt 3 or USB4 technology being made available to more of the popularly-priced laptops made available to householders and small businesses. It would be with an ideal to allow the computer’s user to upgrade towards better graphics at a later date by purchasing an external graphics module.
This is in addition to a wide range of external graphics modules available for these computers with some capable units being offered at affordable price points. I would also like to see more of the likes of the Lenovo Legion BoostStation “card-cage” external graphics module that have the ability for users to install storage devices like hard disks or solid-state drives in addition to the graphics card. Here, these would please those of us who want extra “offload” storage or a “scratch disk” just for use at their workspace. They would also help people who are moving from the traditional desktop computer to a workspace centred around a laptop.
The validity of a laptop computer being equipped with a Thunderbolt 3 or similar port and an integrated graphics chipset is to be recognised. This is more so where the viability of improving on one of these systems using an external graphics module that has a fit-for-purpose dedicated graphics chipset can be considered.
Lenovo Flex 5G / Yoga 5G convertible notebook which runs Windows on Qualcomm ARM silicon – the first laptop computer to have 5G mobile broadband on board
Increasingly, regular computers are moving towards the idea of having processor power based around either classic Intel (i86/i64) or ARM RISC microarchitectures. This is being driven by the idea of portable computers heading towards the latter microarchitecture as a power-efficiency measure with this concept driven by its success with smartphones and tablets.
This represents a different approach to designing silicon, especially RISC-based silicon, where different entities are involved in design and manufacturing. Previously, Motorola took the same approach as Intel and other silicon vendors to designing and manufacturing their desktop-computing CPUs and graphics infrastructure. Now ARM have taken the approach of designing the microarchitecture themselves, with other entities like Samsung and Qualcomm designing and fabricating the exact silicon for their devices.
Apple to move the Macintosh platform to their own ARM RISC silicon
As well, the Linux community have established Linux-based operating systems on ARM microarchitecture. This has led to Google running Android on ARM-based mobile and set-top devices and offering Chromebooks that use ARM silicon; along with Apple implementing it in their operating systems. Not to mention the many NAS devices and other home-network hardware that implement ARM silicon.
Initially the RISC-based computing approach was about more sophisticated use cases like multimedia or “workstation-class” computing compared to basic word-processing and allied computing tasks. Think of the early Apple Macintosh computers, the Commodore Amiga with its many “demos” and games, or the RISC/UNIX workstations like the Sun SPARCStation that existed in the late 80s and early 90s. Now it is about power and thermal efficiency for a wide range of computing tasks, especially where portable or low-profile devices are concerned.
Already mobile and set-top devices use ARM silicon
I see an expectation for computer operating systems and application software to be written and compiled for both classic Intel i86 and ARM RISC microarchitectures. This will require software development tools to support compiling and debugging on both platforms and, perhaps, microarchitecture-agnostic application-programming approaches. It is also driven by the use of ARM RISC microarchitecture in mobile and set-top/connected-TV computing environments, with a desire to allow software developers to have software that is useable across all computing environments.
.. as do a significant number of NAS units like this WD MyCloud EX4100 NAS
Some software developers, usually small-time or bespoke-solution developers, will end up using "managed" software development environments like Microsoft's .NET Framework or Java. These allow the programmer to turn out a machine-executable file that depends on pre-installed run-time elements in order to run. These run-time elements are installed in a manner that is specific to the host computer's microarchitecture and make use of the host computer's capabilities. These environments may allow the software developer to "write once, run anywhere" without knowing whether the computer the software is to run on uses an i86 or ARM microarchitecture.
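The microarchitecture-specific part of such a setup is the run-time installer, which has to pick the right native build for the host. A minimal, hypothetical sketch of that selection step (the build names and the mapping are my own illustration, not any real runtime's packaging scheme):

```python
import platform

# Map raw machine identifiers reported by the OS to the native
# runtime build an installer would fetch. Build names are
# illustrative only.
RUNTIME_BUILDS = {
    "x86_64": "runtime-x64",    # classic Intel/AMD 64-bit
    "AMD64": "runtime-x64",     # Windows reports this string
    "aarch64": "runtime-arm64", # 64-bit ARM on Linux
    "arm64": "runtime-arm64",   # 64-bit ARM on macOS
}

def runtime_for_host():
    """Select the native runtime build for the host machine."""
    machine = platform.machine()
    try:
        return RUNTIME_BUILDS[machine]
    except KeyError:
        raise RuntimeError(f"No runtime build for {machine}")
```

The application code above this layer never consults `platform.machine()` at all; that is the "write once, run anywhere" promise in miniature.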
There may also be an approach towards “one-machine two instruction-sets” software development environments to facilitate this kind of development where the goal is to simply turn out a fully-compiled executable file for both instruction sets.
It could be in an accepted form like run-time emulation or machine-code translation as what is used to allow MacOS or Windows to run extant software written for different microarchitectures. Or one may have to look at what went on with some early computer platforms like the Apple II where the use of a user-installable co-processor card with the required CPU would allow the computer to run software for another microarchitecture and platform.
Computer Hardware Vendors
For computer hardware vendors, there will be an expectation towards positioning ARM-based silicon towards high-performance power-efficient computing. This may be about highly-capable laptops that can do a wide range of computing tasks without running out of battery power too soon. Or “all-in-one” and low-profile desktop computers will gain increased legitimacy when it comes to high-performance computing while maintaining the svelte looks.
Personally, if ARM-based computing was to gain significant traction, it may have to be about Microsoft encouraging silicon vendors other than Qualcomm to offer ARM-based CPUs and graphics processors fit for “regular” computers. As well, Microsoft and the Linux community may have to look towards legitimising “performance-class” computing tasks like “core” gaming and workstation-class computing on that microarchitecture.
There may be the idea of using 64-bit i86 microarchitecture as a solution for focused high-performance work. This may be due to a large amount of high-performance software code written to run with the classic Intel and AMD silicon. It will most likely exist until a significant amount of high-performance software is written to run natively with ARM silicon.
Thanks to Apple and Microsoft heading towards ARM RISC microarchitecture, the computer hardware and software community will have to look at working with two different microarchitectures especially when it comes to regular computers.
Of late, the personal-IT press have identified a 13” ultraportable laptop computer that has set a benchmark when it comes to consumer-focused computers of that class. This computer is the Dell XPS 13 family of Ultrabooks which are a regular laptop computer family that runs Windows and is designed for portability.
What makes these computers special?
A key factor about the way Dell had worked on the XPS 13 family of Ultrabooks was to make sure the ultraportable laptops had the important functions necessary for this class of computer. They also factored in the durability aspect because if you are paying a pretty penny for a computer, you want to be sure it lasts.
As well, it was all part of assuring that the end-user got value for money when it came to purchasing an ultraportable laptop computer.
In a previous article that I wrote about the Dell XPS 13, I compared it to the National Panasonic mid-market VHS videocassette recorders offered from the mid-1980s to the PAL/SECAM (Europe, Australasia, Asia) market, and the Sony mid-market MiniDisc decks offered through the mid-to-late 1990s. Both of these product ranges were designed with a focus on offering the features and performance that count for most users, at a price that offers value for money and is “easy to stomach”.
Through the generations, Dell introduced the very narrow bezel for the screen, but this required the camera module to be mounted under the screen. That earnt some criticism in the computing press due to it “looking up at the user’s nose”. For the latest generation, Dell developed a very small camera module that can sit at the top of the screen while maintaining the XPS 13’s very narrow bezel.
The Dell XPS 13 Kaby Lake 2-in-1 convertible Ultrabook variant
The Dell XPS 13 can be specified with three different Intel Core CPU grades (i3, i5 and i7), and users can specify a 4K UHD display option. The ultraportable laptop relies on Intel integrated graphics, but the past two generations of the Dell XPS 13 are equipped with two Thunderbolt 3 ports, so you can use it with an external graphics module if you want improved graphics performance.
There was some doubt about Dell introducing a 2-in-1 convertible variant of the XPS 13, this form factor being perceived as a gimmick rather than something of genuine utility. But they introduced the convertible variant of this Ultrabook as part of the 2017 Kaby Lake generation. It placed Dell in a highly-competitive field of ultraportable convertible computers and could easily see them focus on “value-focused” 2-in-1 ultraportables.
What will this mean for Dell and the personal computer industry?
Thin Webcam circuitry atop display rectifies the problem associated with videocalls made on the Dell XPS 13
The question that will come about is how far Dell can go towards improving this computer. At the moment, it could be about keeping each generation of the XPS 13 Ultrabook in step with the latest mobile-focused silicon and mobile-computing technologies. They could also end up with a 14” clamshell variant of this computer for those of us wanting a larger screen that still fits comfortably on an economy-class airline tray table.
For the 2-in-1 variant, Dell could even bring the XPS 13 to a point where it is simply about value for money compared to other 13” travel-friendly convertible ultraportables. Here, they would underscore the features that every user of that class of computer needs, especially when it comes to “on-the-road” use, along with preserving a durable design.
Other computer manufacturers will also be looking at the Dell XPS 13 as the computer to match, if not beat, when it comes to offering value for money in their 13” travel-friendly clamshell ultraportable ranges. This can include companies heavily present in particular market niches, like enterprise computing, who will take what Dell is offering and tailor it to their particular niche.
Best value configuration suggestions
Most users could get by with a Dell XPS 13 that uses an Intel Core i5 CPU, 8GB of RAM and at least 256GB of solid-state storage. You may want to pay more for an i7 CPU and/or 16GB of RAM if you are chasing more performance, or spend more on a higher storage capacity if you will be storing more data while away.
If you expect to use your XPS 13 on the road, it would be wise to avoid the 4K UHD screen option, because that resolution could make your Ultrabook thirstier when running on its own battery.
The 2-in-1 convertible variant is worth considering if you are after this value-priced ultraportable in a “Yoga-style” convertible form.
What I have found through my experience with the Dell XPS 13 computers along with the computer-press write-ups about them is that Dell has effectively defined a benchmark when it comes to an Intel-powered travel-friendly ultraportable laptop computer.