Category: Commentary

How about having an up-to-date recovery image on your computer

Dell XPS 13 press picture courtesy of Dell Australia

To simplify any repair or restore effort, or to get your “new toy” up and running as quickly and smoothly as possible, you need access to a recovery image from the manufacturer representing the latest version of your computer’s operating system, device drivers and allied software.

Recent computers that run MacOS or Windows now come with a partition on their hard disk or SSD holding a copy of the operating system and other software supplied with the computer “out of the box”. Alternatively, you can download a recovery image for your computer from the manufacturer’s Website using a manufacturer-supplied app.

This replaces the previous method of delivering an optical disc containing the operating system and other manufacturer-supplied software with the computer, now that newer computers aren’t equipped with optical drives.

This recovery data comes into play if the operating system fails and you have to reinstate it from a known-good copy. Examples include the computer being taken over by malware, needing to get it back to “ground zero” before relinquishing it, or the system disk (hard disk or SSD) failing so you have to put the operating system on a new system disk as part of reconstructing your computing environment.

But Microsoft, Apple and the hardware manufacturers associated with your computer’s internal peripherals update their software regularly as part of their software quality control. There are also the feature updates that add functionality or implement newer device-class drivers as part of an operating system’s lifecycle.

Typically, this recovery image represents the software that came with your computer when it left the factory and doesn’t include all the newer updates and revisions that have taken place since. So if you have had to restore the operating system from that recovery image, you will then have to download updates from your computer’s manufacturer, the operating-system vendor or other software developers to bring your computer up to date.

Firmware / BIOS updates may not matter here because they are delivered as “download-to-install” packages. When these packages are run, they verify and transfer the necessary firmware code to the computer’s BIOS / UEFI subsystem, or to the firmware subsystems of peripherals supported by the computer’s manufacturer, then commence the installation process.

Questions that can be raised include whether the factory-supplied data should remain the definitive “reference data” for your system, or whether the computer manufacturer should provide a means to keep that data up to date with the latest software versions for your computer.

This will be an issue with manufacturers who prefer to customise the software drivers that run hardware associated with their computer products while end-users prefer to run the latest software drivers offered by the hardware’s manufacturer. This is typically due to the hardware manufacturer’s code being updated more frequently and is of concern with display chipsets like Intel’s integrated-graphics chipsets.

Similarly there is the issue that people are likely to change the software edition that comes with their computer like upgrading to a “Pro” edition of the Windows operating system when the computer came with the Home edition.

An approach that a manufacturer can take over a computer system’s lifetime is to revise the definitive “reference data” set for that system. This could be undertaken when the operating system undergoes a major revision like a feature update. This can be about taking stock of the device drivers and updating them to newer stable code as part of offering the latest “reference data” set.

That way, a user who is doing an operating-system recovery doesn’t need to hunt for and download updates as part of the process if they want the computer running the latest code.

This kind of approach can also come into its own during the time the computer system is on the market. It means that during subsequent years, newer computer units receive the latest software updates before they leave the factory, so the computer’s end-user or corporate IT department doesn’t have to download the latest versions of the operating system, device drivers and other software as part of commissioning their new computer system.

The idea of computer manufacturers keeping their products’ software-recovery data current will benefit all of us whether we are buying that new computer and want to get that “new toy” running or need to reinstate the operating software in our computers due to hardware or software failures.


Chapter marking within podcasts

Android main interactive lock screen

Smartphones are facilitating our listenership to podcasts

As we listen to more spoken-word audio content in the form of podcasts and the like, we may want to see this kind of audio content easily delineated in a logical manner. For that matter, such content is being listened to as we drive or walk thanks to the existence of car and personal audio equipment including, nowadays, the “do-it-all” smartphones being connected to headphones or car stereos.

This may be to return to the start of a segment if we were interrupted so we really know where we are contextually. Or it could be to go to a particular “article” in a magazine-style podcast if we are after just that article.

Prior attempts to delineate spoken-word content

In-band cue marking on cassette

Some people who distributed cassette-based magazine-style audio content, typically to vision-impaired people, used mixed-in audio marking recorded at high speed to allow a user to find articles on a tape.

This worked with tape players equipped with cue and review functionality, something that was inconsistently available. Such functionality, typically activated when you held down the fast-forward or rewind buttons while the tape player was in play mode, allowed the tape to be run forward or backward at high speed while you could hear what was recorded, albeit as a high-pitched warbling tone.

With this indexing approach, you would hear a reference tone that delineated the start of the segment in either direction. But if you used the “cue” button to seek through the tape, you would also hear an intelligible phrase that identified the segment so you knew where you were.

Here, this function was dependent on whether the tape player had cue and review operation and required the user to hold down the fast-wind buttons for it to be effective. This ruled out use within car-audio setups that required the use of locking fast-wind controls for safe operation.

Index Marking on CDs

The original CD Audio standard had inherent support for index marking that was subordinate to the track markers typically used to delineate the different songs or pieces. This was to delineate segments within a track such as variations within a classical piece.

Most 1980s-era CD players of the type that connected to your hi-fi system supported this functionality, more so with premium-level models, and how they treated this function varied markedly. The most basic implementation of this feature was to show the index number on the display after the track number. CD players with eight-digit displays showed the index number as a smaller-sized number after the track number, while those with a four or six-digit display had you press the display button to show the track number and index number.

Better implementations had the ability to step between the index marks with this capability typically represented by an extra pair of buttons on the player’s control surface labelled “INDEX”. Some more sophisticated CD players even had direct access to particular index numbers within a track or could allow you to program an index number within a track as part of a user-programmed playlist.

As well, some CDs, usually classical-music discs featuring long instrumental works that are best referenced directly at significant points, made use of this feature. Support for index marking died out by the 1990s, with the feature by then focused on marking the proper start of a song. This was considered important for live recordings or concept albums where a song or instrumental piece would segue into another one, so that repeat, random (shuffle) or programmed-play modes could bring each song or piece in at its proper start.

There was an interest in spoken-word material on CD through the late 1990s with the increase in the number of CD players installed in cars. This was typically in the form of popular audiobooks or foreign-language courseware, and car trips were considered a favourite occasion for listening to such content. But these spoken-word CDs were limited to using tracks to delineate chapters in a book or lessons within a foreign-language course.

But CD-R, with its ability to support on-site short-run replication of limited-appeal content, opened the door for material like religious sermons or talks to appear on the CD format. This technology effectively “missed the boat” when it came to index marking, as most CD-burning software didn’t allow you to place index marks within a track.

The podcast revolution

File-based digital audio and the Internet opened the door to regularly-delivered spoken-word audio content in the form of podcasts. A podcast is effectively a radio show delivered as an audio file available to download, with RSS Webfeeds allowing listeners to follow it for newer episodes.

Here, podcast-management or media-management software automatically downloads or enqueues podcast episodes for subsequent listening, marking what is listened to as “listened”. Some NAS-based DLNA servers can be set up to follow podcasts and download them to the NAS hard disk as new content, creating a UPnP-AV/DLNA content tree out of these podcasts available to any DLNA-compliant media playback device.
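To make the Webfeed mechanics concrete, here is a minimal sketch (in Python, using only the standard library) of the step such a podcast manager or NAS performs: fetch the show’s RSS feed and list the audio enclosures it would queue for download. The feed URL is a placeholder, and real software would also track which episodes have already been downloaded or marked as “listened”.

```python
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/podcast/feed.xml"   # placeholder feed address

# Fetch and parse the RSS Webfeed that the podcast publishes.
with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# Each <item> is an episode; its <enclosure> points at the audio file.
for item in tree.getroot().iter("item"):
    title = item.findtext("title", default="(untitled episode)")
    enclosure = item.find("enclosure")
    if enclosure is not None:
        # A podcast manager would download this URL and flag it as new content.
        print(title, "->", enclosure.get("url"))
```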

The podcast has gained a strong appeal with small-time content creators who want to create what is effectively their own radio shows without being encumbered by the rules and regulations of broadcasting or having to see radio stations as content gatekeepers.

The podcast has also appealed to radio stations in two different ways. Firstly, it has allowed the station’s talent to make spoken-word content they have previously broadcast available for listeners to hear again at a later time.

It also meant that the station’s talent could create supplementary audio content that isn’t normally broadcast but available for their audience, thus pushing their brand and that of the station further. This includes the creation of frequently-published short-form “snack-sized” content that may allow for listening during short journeys for example.

Secondly, a talk-based radio station could approach a podcaster and offer to syndicate their podcast, that is, pay for the right to broadcast the podcast on their radio station into the station’s market. It would appeal to radio stations as programming that fills schedule gaps like the overnight “graveyard shift”, weekends or summer holidays when their regular talent base isn’t available. But it can also be used as a way to put a rising podcast star “on the map” before considering whether to have them behind the station’s microphone.

Why chapter marking within podcasts?

A lot of podcast authors run their shows in a magazine format, perhaps with multiple articles or segments within the same podcast. As well, anyone giving a talk or sermon typically breaks it down into points so the audience knows where they are. But the idea of delineating this structure within an audio file hasn’t been properly worked out.

This can benefit listeners who are after a particular segment especially within a magazine-style podcast. Or a listener could head back to the start of a logical point in the podcast when they resume listening so they effectively know where they are at contextually.

This can also appeal to ad-supported podcast directories like Spotify who use radio-style audio advertising and want to insert ads between articles or sections of a podcast. The same applies to radio stations who wish to syndicate podcasts. Here they would need to pause podcasts to insert local time and station-identity calls and, in some cases, local advertising spots or news bulletins.

Is this feasible?

The ID3v2 standard, which carries metadata for most audio file formats including MP3, AAC and FLAC, supports chapter marking within the audio file. It is based around a file-level “table of contents” that defines each audio chapter and can even carry textual and graphical descriptions for each chapter.

There is also support for hierarchical tables of contents, like a list of “points” within each content segment as well as an overall list of content segments. Each “table of contents” has a flag that indicates whether the chapters within it are to be played in order or can be played individually. That could be used by an ad-supported podcast directory or broadcast playout program to decide whether to insert local advertising between entries.
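As a rough illustration of how a podcast producer could write these frames, here is a minimal sketch using the Python mutagen library’s CHAP and CTOC frame classes. The chapter times and titles are made-up example values, and it assumes the MP3 already carries an ID3 tag.

```python
from mutagen.id3 import ID3, CHAP, CTOC, CTOCFlags, TIT2

# Example chapter list: (element id, start ms, end ms, title)
chapters = [
    ("chp1", 0,       180000, "Introduction"),
    ("chp2", 180000,  900000, "Main interview"),
    ("chp3", 900000, 1500000, "Listener questions"),
]

tags = ID3("episode.mp3")
for element_id, start, end, title in chapters:
    tags.add(CHAP(element_id=element_id, start_time=start, end_time=end,
                  sub_frames=[TIT2(encoding=3, text=[title])]))

# The top-level table of contents lists the chapters; the ORDERED flag is the
# "play in order" indicator mentioned above.
tags.add(CTOC(element_id="toc",
              flags=CTOCFlags.TOP_LEVEL | CTOCFlags.ORDERED,
              child_element_ids=[c[0] for c in chapters],
              sub_frames=[TIT2(encoding=3, text=["Table of contents"])]))
tags.save()
```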

What is holding it back?

The main problem with utilising the chapter markers supported within ID3v2 is the lack of proper software support at both the authoring and playback ends of the equation.

Authoring software available to the average podcaster provides inconsistent and non-intuitive support for placing chapter markers within a podcast. This opens up room for errors when authoring that podcast and enabling chapter marking therein.

As well, very few podcast manager and media player programs recognise these chapter markers and provide the necessary navigation functionality. This could be offered at least by having chapter locations visible as tick marks on the seek-bar in the software’s user interface and, perhaps, by allowing you to hold down the cue and review buttons to seek to the previous or next chapter.

Better user interfaces could list out chapters within a podcast so users can know “what they are up to” while listening or to be able to head to the segment that matters in that magazine-style podcast.
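On the playback side, recognising the chapter markers is not a big ask either. The sketch below (again assuming the Python mutagen library and a file tagged with CHAP frames carrying TIT2 title sub-frames) reads the chapters back into the kind of navigation list a player could show or use for seek-bar tick marks.

```python
from mutagen.id3 import ID3

def list_chapters(path):
    tags = ID3(path)
    chapters = []
    for chap in tags.getall("CHAP"):
        # Use the chapter's title sub-frame if present, else its element ID.
        title_frame = chap.sub_frames.get("TIT2")
        title = str(title_frame.text[0]) if title_frame else chap.element_id
        chapters.append((chap.start_time / 1000.0, title))   # start time in seconds
    return sorted(chapters)

for start, title in list_chapters("episode.mp3"):
    print(f"{start:8.1f}s  {title}")
```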

Similarly, the podcast scene needs to know the benefits of chapter-marking a podcast. In an elementary form, marking out a TED Talk, church sermon or similar speech at each key point can be beneficial. For example, a listener could simply recap a point they missed due to being distracted, thus getting more value out of that talk. If the podcast has a “magazine” approach with multiple segments, the listener may choose to head to a particular segment that interests them.

Conclusion

The use of chapter marking within podcasts and other spoken-word audio content could make this kind of content easier to deal with for most listeners. Here, it is more about seeking out a particular segment within the podcast or heading back to the start of a significant point therein if you were interrupted, so you can hear that point in context.


How regional next-generation infrastructure providers enable competitive Internet service

Previous Coverage

Gigaclear fibre-optic cable - picture courtesy of Gigaclear

Gigaclear – laying their own fibre-to-the-premises within a rural area in the UK

What is happening with rural broadband in the UK

Further Comments

In some countries like the UK, Australia and Germany, regional broadband infrastructure providers set up shop to provide next-generation broadband to a particular geographic area within a country.

This is used to bring next-generation broadband technology like fibre-to-the-premises to homes and businesses within that geographic area. But let me remind you that fibre-to-the-premises isn’t the only medium they use — some of them use fixed wireless or a fibre-copper setup like HFC cable-modem technology or fibre + Ethernet-cable technology. But they aren’t using the established telephone network at all thus they stay independent of the incumbent infrastructure provider and, in some areas like rural areas, that provider’s decrepit “good enough to talk, not good enough for data” telephone wiring.

In the UK especially, most of these operators will target a particular kind of population centre like a rural village cluster (Gigaclear, B4RN, etc), a large town or suburb (Zzoom), city centres (Cityfibre, Hyperoptic, etc) or even just greenfield developments. Some operators set themselves up in multiple population centres in order to get them wired up for the newer technology but all of the operators will work on covering the whole of that population centre, including its outskirts.

This infrastructure may be laid ahead of the incumbent traditional telco or infrastructure operator like Openreach, NBN or Deutsche Telekom or it may be set up to provide a better Internet service than what is being offered by the incumbent operator. But it is established and maintained independently of the incumbent operator.

Internet service offerings

Typically the independent regional broadband infrastructure providers run a retail Internet-service component available to households and small businesses in that area and using that infrastructure. The packages are often pitched to offer more value for money than what is typically offered in that area thanks to the infrastructure that the provider controls.

But some nations place a competitive-market requirement on these operators to offer wholesale Internet service to competing retail ISPs, with this requirement coming into force when they have significant market penetration. That is usually assessed by the number of actual subscribers who are connected to the provider’s Internet service or the number of premises that are passed by the operator’s street-level infrastructure. In addition, some independent regional infrastructure providers offer wholesale service earlier as a way to draw in more money to increase their footprint.

This kind of wholesale Internet service tends to be facilitated by special wholesale Internet-service markets that these operators are part of. Initially this will attract boutique home and small-business Internet providers who focus on particular customer niches. But some larger Internet providers may prefer to take an infrastructure-agnostic approach, offering mainstream retail Internet service across multiple regional service providers.

Support by local and regional government

Local and regional governments are more likely to provide material and other support to these regional next-generation infrastructure operators. This is to raise their municipality’s or region’s profile as an up-to-date community to live or do business within. It is also part of the “bottom-up” approach that these operators take in putting themselves on the map.

In a lot of cases, the regional next-generation infrastructure providers respond to tenders put forward by local and regional governments. This is either to provide network and Internet service for the government’s needs or to “wire up” the government’s area of jurisdiction, or a part thereof, for next-generation broadband.

Legislative requirements

There will have to be legislative enablers put forward by national and regional governments to permit the creation and operation of regional next-generation broadband network infrastructure. This could include the creation and management of wholesale-broadband markets to permit retail-Internet competition.

There is also the need to determine how much protection a small regional infrastructure operator needs against the incumbent or other infrastructure operators building over their infrastructure with like offerings. This may be about assuring the small operator sufficient market penetration in their area before others come along and compete, along with providing an incentive to expand in to newer areas.

It will also include issues like land use and urban planning along with creation and maintenance of rights-of-way through private, regulated or otherwise encumbered land for such use including competitors’ access to these rights-of-way.

That also extends to access to physical infrastructure like pits, pipes and poles by multiple broadband service providers, especially where an incumbent operator has control over that infrastructure. It can also extend to use of conduits or dark fibre installed along rail or similar infrastructure expressly for the purpose of creating data-communications paths.

That issue can also extend to how multiple-premises buildings and developments like shopping centres, apartment blocks and the like are “wired up” for this infrastructure. Here, it can be about allowing or guaranteeing right of access to these developments by competing service providers and how in-building infrastructure is provided and managed.

The need for independent regional next-generation broadband infrastructure

But if an Internet-service market is operating in a healthy manner and offering value-for-money Internet service, as in New Zealand, there may not be a perceived need for competing regional next-generation infrastructure to exist.

Such infrastructure can be used to accelerate the provision of broadband within rural areas, provide different services like simultaneous-bandwidth broadband service for residential users, or increase the value for money when it comes to Internet service. Here, the existence of this independent infrastructure with retail Internet services offered through it can also be a way to keep the incumbent service operator in check.


Should videoconference platforms support multiple devices concurrently?

Zoom (MacOS) multi-party video conference screenshot

The idea of a Zoom or similar platform user joining the same videoconference from multiple devices could be considered in some cases

Increasingly, when we use a videoconferencing platform, we install the client software associated with it on all the computing devices we own. Then we log in to our account on that platform so we can join videoconferences from whatever device is at hand and suits our needs.

But most of these platforms only allow a user to use one device at a time to participate in a videoconference. Zoom extends on this by allowing concurrent use of devices of different types (smartphone, mobile-platform tablet or regular computer) by the same user account on the same conference.

But why support the concurrent use of multiple devices?

There are some use cases where multiple devices used concurrently may come in handy.

Increased user mobility

Dell Inspiron 14 5000 2-in-1 - viewer arrangement at Rydges Melbourne (Locanda)

especially with tablet computers and 2-in-1s located elsewhere

One of these is to assure a high level of mobility while participating in a videoconference. This may be about moving between a smartphone that is in your hand and a tablet or laptop that is at a particular location like your office.

It can also be about joining the same videoconference from other devices that are bound to the same account. This could be about avoiding multiple people crowding around one computing device to participate in a videoconference from their location, which can lead to user discomfort or too many people appearing in one small screen in a “tile-up” view of a multiparty videoconference. Or it can be about some people participating in a videoconference from an appropriate room like a lounge area or den.

Lenovo Yoga Tablet 2 tablet

like in a kitchen with this Lenovo Yoga Tab Android tablet

Similarly, one or more users at the same location may want to simply participate in the videoconference in a passive way but not be in the presence of others who are actively participating in the same videoconference. This may simply be to monitor the call as it takes place without the others knowing. Or it could be to engage in another activity like preparing food in the kitchen while following the videocall.

As far as devices go, there may be the desire to use a combination of devices that have particular attributes to get the most out of the videocall. For example, it could be about spreading a large videoconference across multiple screens such as having a concurrent “tile-up” view, active speaker and supporting media across three screens.

Or a smartphone could be used for audio-only participation so you can have the comfort of a handheld device while you see the participants and are seen by them on a tablet or regular computer. As well, some users may operate two regular computers like a desktop or large laptop computer along with a secondary laptop or 2-in-1 computer.

Support for other device types by videoconferencing platforms

.. or a smart display like this Google-powered Lenovo smart display

Another key trend is for videoconferencing platforms to support devices that aren’t running desktop-platform or mobile-platform operating systems.

This is exemplified by Zoom providing support for popular smart-display platforms like Amazon Echo Show or Google Smart Display. This is even though some of the voice-assistant platforms that offer smart displays already support videocall functionality on platforms owned by the voice-assistant platform’s developer or by one or more companies they are partnering with.

Another example is Google providing streaming-vision support for a Google Meet videoconference to a large-screen TV via Chromecast. It is something that could reinvigorate videoconferencing on smart-TV / set-top-box platforms, something I stand for so that many people, like a whole family or household, can participate in a videoconference from one end. This is once factors like accessory Webcams, 10-foot “lean-back” user interfaces and the like are worked out.

It can also extend to voice-assistant platforms co-opting a smart speaker and a device equipped with a screen and camera to facilitate a videoconference. This could be either with you hearing the videoconference via the smart speaker or via the display device’s audio subsystem.

What can be done to make this secure for small accounts?

There can be security and privacy issues with this kind of setup with people away from the premises but operating the same account being able to join in a videoconference uninvited. Similarly, a lot of videoconferencing platforms who offer a service especially to consumers may prefer to offer this feature as part of their paid “business-class” service packages.

One way to make this kind of participation secure for a small account would be to use logical-network verification. This is to make sure that all devices are behind the same logical network (subnet) if there is a want for multiple devices to participate from the same account and in the same videoconference. It may not work well with devices having their own modem such as smartphones, tablets or laptops directly connected to mobile broadband or people plugging USB mobile-broadband modems in to their computers. Similarly, it may not work with public-access or guest-access networks that are properly configured to avoid devices discovering each other on the same network.
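As a rough sketch of that logical-network check, the Python snippet below compares the network address of each device an account reports and only treats them as being at the same location when they fall on the same subnet. The addresses are made-up examples, and a real service would also have to deal with NAT, where several devices behind one router present the same public address.

```python
import ipaddress

def same_subnet(devices):
    """devices: list of (ip_address, prefix_length) pairs reported for one account."""
    networks = {ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
                for ip, prefix in devices}
    return len(networks) == 1

household = [("192.168.1.20", 24), ("192.168.1.35", 24)]   # phone and tablet at home
remote    = [("192.168.1.20", 24), ("203.0.113.7", 24)]    # one device off-site

print(same_subnet(household))   # True  -> allow the extra device to join
print(same_subnet(remote))      # False -> require an explicit invitation
```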

Similarly, device-level authentication, which could facilitate password-free login, can also be used to authenticate the actual devices operated by an account. A business rule could place a limit on the number of devices, of whatever class, operated by the same consumer account that can concurrently join a videoconference at any one time. This could realistically be set at five devices, allowing for the fact that a couple or family may prefer to operate the same account across all the devices owned by the members of that group rather than have each member maintain an individual account.

Conclusion

The idea of a videoconference platform allowing a single account to participate in a videoconference from multiple devices concurrently is worth considering. This can be about increased mobility or user comfort, or to cater towards the use of newer device types in the context of videoconferencing.


Why do I defend Europe creating their own tech platforms?

Previous Coverage on HomeNetworking01.info

Map of Europe by User:mjchael, based on preliminary work by maix [CC-BY-SA-2.5 (http://creativecommons.org/licenses/by-sa/2.5)], via Wikimedia Commons

Europeans could compete with Silicon Valley when offering online services

How about encouraging computer and video games development in Europe, Oceania and other areas

My Comments

Regularly I keep an eye out for information regarding efforts within Europe to increase their prowess when it comes to business and personal IT services. This is more so as Europe is having to face competition from the USA’s Silicon Valley and from China in these fields.

But what do Europeans stand for?

Airbus A380 superjumbo jet wet-leased by HiFly at Paris Air Show press picture courtesy of Airbus

Airbus have proven that they are a valid European competitor to Boeing in the aerospace field

What Europeans hold dear to their heart when it comes to personal, business and public life are their values. These core values encompass freedom, privacy and diversity and have been built upon the experience of their history, especially since the Great Depression.

They have had to deal with the Hitler, Mussolini and Stalin dictatorships, especially with Hitler’s Nazis taking over parts of European nations like France and Austria, along with the Cold War era when Eastern Europe was under communist dictatorships loyal to the Soviet Union. All the affected countries were run as police states with national security forces conducting mass surveillance of the populace at the behest of the dictators.

The EU’s European Parliament summed this up succinctly on their page with Europeans placing value on human dignity, human rights, freedom, democracy, equality and the rule of law. It is underscored in a pluralistic approach with respect for minority groups.

I also see this in the context of business through a desire to have access to a properly-functioning competitive market driven by publicly-available standards and specifications. It includes a strong deprecation of bribery, corruption and fraud within European business culture, whether this involves the public sector or not. This is compared to an “at-any-cost” approach valued by the USA and China when it comes to doing business.

As well, the European definition of a competitive market is the availability of goods or services for best value for money. This includes people who are on a very limited budget gaining access to these services in a useable manner that underscores the pluralistic European attitude.

How is this relevant to business and consumer IT?

Nowadays, business and consumer IT is more “service-focused” through the use of online services whether totally free, complementary with the purchase of a device, paid for through advertising or paid for through regular subscription payments. Increasingly these services are being driven by the mass collection of data about the service’s customers or end-users with people describing the data as being the “new oil”.

Examples of this include Web search engines, content hosting providers like YouTube or SoundCloud, subscription content providers, online and mobile gaming services, and voice-driven assistants. It also includes business IT services like cloud-computing services and general hosting providers that facilitate these services.

Europeans see this very differently due to their heritage. Here, they want control over their data along with the ability to participate in a competitive market that works to proper social expectations. This is compared to business models operated by the USA and China that disrespect the “Old World’s” approach to personal and business values.

The European Union have defended these goals but primarily with the “stick” approach. It is typically through passing regulations like the GDPR data-protection regulations or taking legal action against US-based dominant players within this space.

But what needs to happen and what is happening?

What I often want to see happen is European companies building up credible alternatives to what businesses in China and the USA are offering. Here, the various hardware, software and services that Europe has to offer would respect European personal and business culture and values. They also need to offer this same technology to individuals, organisations and jurisdictions who believe in the European values of stable government that respects human rights, including citizen privacy and the rule of law.

What is being done within Europe?

Spotify Windows 10 Store port

Spotify – one of Europe’s success stories

There are some European success stories like Spotify, the “go-to” online music-subscription service based in Sweden, as well as a viable French competitor in the form of Deezer, along with SoundCloud, an audio-streaming service based in Germany.

Candy Crush Saga gameplay on Windows 10

Candy Crush Saga – a European example of what can be done in the mobile game space

A few of the popular mobile “guilty-pleasure” games like Candy Crush Saga and Angry Birds were developed in Europe. Let’s not forget Ubisoft, a significant French video-games publisher which has set up studios around the world and is one of the most significant household names in video games. Think of game franchises like Assassin’s Creed or Far Cry, which are some of the big-time titles this developer has put out.

Then Qwant appeared as a European-based search engine that creates its own index and stores it within Europe. This is compared to some other European-based search engines which are really “metasearch engines” that concatenate data from multiple search engines including Google and Bing.

There have been a few Web-based email platforms like ProtonMail surfacing out of Switzerland that focus on security and privacy for the end-user. This is thanks to Switzerland’s strong respect for business and citizen privacy especially in the financial world.

Freebox Delta press photo courtesy of Iliad (Free.fr)

The Freebox Delta is an example of a European product running a European voice assistant

There are some European voice assistants surfacing, with BMW developing the Intelligent Personal Assistant for in-vehicle use while the highly-competitive telecommunications market in France yielded some voice assistants of French origin thanks to Orange and Free. Spain got in on the act with Movistar offering their own voice assistant. I see growth in this aspect of European IT thanks to the Amazon Voice Interoperability Initiative, which allows a single hardware device like a smart speaker to provide access to multiple voice-assistant platforms.

AVM FRITZ!Box 7530 press image courtesy of AVM GmbH

The AVM FRITZ!Box 7530 is a German example of home network hardware with European heritage

Technicolor, AVM and a few other European companies are creating home-network hardware, typically in the form of carrier-supplied home-network routers. AVM, though, offer their FRITZ! lineup of home-network hardware through the retail channel, with one of these devices being the first home-network router to automatically update itself with the latest patches. In the case of Free.fr, their Freebox products are even heading towards the same kind of user interface expected out of a recent Synology or QNAP NAS, thanks to the continual effort to add more capabilities to these devices.

But Europe are putting the pedal to the metal when it comes to cloud computing, especially with the goal to assure European sovereignty over data handled this way. Qarnot, a French company, have engaged in the idea of computers that are part of a distributed-computing setup yielding their waste heat from data processing for keeping you warm or allowing you to have a warm shower at home. Now Germany is heading down the direction of a European-based public cloud for European data sovereignty.

There has been significant research conducted by various European institutions that has impacted our online lives. One example is the Fraunhofer Institute in Germany, which contributed to the development of file-based digital audio in both the MP3 and AAC formats. Another group of examples represents the efforts by various European public-service broadcasters to effectively bring about “smart radio”: the “flagging” of traffic announcements, smart automatic station following, selection of broadcasters by genre or area, and the display of broadcast-content metadata through the ARI and RDS standards for FM radio and the evolution of DAB+ digital radio.

But what needs to happen, and may well be happening, is the establishment and maintenance of Europe as a significantly strong third force for consumer and business IT. As well, Europe needs to expose its technology and services to people and organisations in other countries rather than focusing them on the European, Middle Eastern and Northern African territories.

European technology companies would need to offer the potential worldwide customer base something that differentiates themselves from what American and Chinese vendors are offering. Here, they need to focus their products and services towards those customers who place importance on what European personal and business values are about.

What needs to be done at the national and EU level

Some countries like France and Germany implement campaigns that underscore products that are made within these countries. Here, they could take these “made in” campaigns further by promoting services that are built up in those countries and have most of their customers’ data within those countries. Similarly the European Union’s organs of power in Brussels could then create logos for use by IT hardware and software companies that are chartered in Europe and uphold European values.

At the moment Switzerland have taken a proactive step towards cultivating local software-development talent by running a “Best of Swiss Apps” contest. Here, it recognises Swiss app developers who have turned out excellent software for regular or mobile computing platforms. At the moment, this seems to focus on apps which primarily have Switzerland-specific appeal, typically front-ends to services offered by the Swiss public service or companies serving Swiss users.

Conclusion

One goal for Europe to achieve is a particular hardware, software or IT-services platform that can do what Airbus and Arianespace have done with aerospace. This is to raise some extraordinary products that place themselves on the world stage as a viable alternative to what the USA and China offer. As well, it puts the establishment on notice that they have to raise the bar for their products and services.


Why do I see Thunderbolt 3 and integrated graphics as a valid option set for laptops?

Dell XPS 13 8th Generation Ultrabook at QT Melbourne rooftop bar

The Dell XPS 13 series of ultraportable computers uses a combination of Intel integrated graphics and Thunderbolt 3 USB-C ports

Increasingly, laptop users want to make sure their computers earn their keep for computing activities that are performed away from their home or office. But they also want the ability to do some computer activities that demand more from these machines like playing advanced games or editing photos and videos.

What is this about?

Integrated graphics infrastructure like the Intel UHD and Iris Plus GPUs allows your laptop computer to run for a long time on its own battery. This is thanks to the infrastructure using the system RAM to “paint” the images you see on the screen, along with being optimised for low-power mobile use. This is more so if the computer has a screen resolution of no more than the equivalent of Full HD (1080p), which also doesn’t put much strain on the computer’s battery capacity.

They may be seen as being suitable for day-to-day computing tasks like Web browsing, email or word-processing, or lightweight multimedia and gaming activities while on the road. Even some games developers are working on capable, playable video games optimised to run on integrated-graphics infrastructure so you can play them on modest computer equipment or use them to while away a long journey.

There are some “everyday-use” laptop computers that are equipped with a discrete graphics processor along with the integrated graphics, with the host computer implementing automatic GPU-switching for energy efficiency. Typically the graphics processor doesn’t really offer much for performance-grade computing because it is a modest mobile-grade unit but may provide some “pep” for some games and multimedia tasks.

Thunderbolt 3 connection on a Dell XPS 13 2-in-1

But if your laptop has at least one Thunderbolt 3 USB-C port along with the integrated graphics infrastructure, it will open up another option. Here, you could use an external graphics module, also known as an eGPU unit, to add high-performance dedicated graphics to your computer while you are at home or the office. As well, these devices provide charging power for your laptop which, in most cases, would relegate the laptop’s supplied AC adaptor as an “on-the-road” or secondary charging option.

A use case often cited for this kind of setup is a university student who is studying on campus and wants to use the laptop in the library to do their studies or take notes during classes. They then want to head home, whether it is at student accommodation like a dorm / residence hall on the campus, an apartment or house that is shared by a group of students, or their parents’ home where it is within a short affordable commute from the campus. The use case typifies the idea of the computer being able to support gaming as a rest-and-recreation activity at home after all of what they need to do is done.

Razer Blade gaming Ultrabook connected to Razer Core external graphics module - press picture courtesy of Razer

Razer Core external graphics module with Razer Blade gaming laptop

Here, the idea is to use the external graphics module with the computer and a large-screen monitor to have the graphics power come into play during a video game. As well, if the external graphics module is portable enough, it may be about connecting the laptop to a large-screen TV installed in a common lounge area at their accommodation on an ad-hoc basis so they benefit from that large screen when playing a game or watching multimedia content.

The advantage in this use case is that the computer remains affordable for a student at their current point in life thanks to it not being kitted out with a dedicated graphics processor that may end up being underwhelming anyway. The student can then save towards an external graphics module of their choice and buy it at a later time when they see fit. In some cases, it may be about using a “fit-for-purpose” graphics card like an NVIDIA Quadro with the eGPU if they maintain interest in that architecture or multimedia course.

It also extends to business users and multimedia producers who prefer to use a highly-portable laptop “on the road” but use an external graphics module “at base” for those activities that need extra graphics power. Examples of these include to render video projects or to play a more-demanding game as part of rest and relaxation.

Sonnet eGFX Breakaway Puck integrated-chipset external graphics module press picture courtesy of Sonnet Systems

Sonnet eGFX Breakaway Puck integrated-chipset external graphics module – the way to go for ultraportables

There are a few small external graphics modules that are provided with a soldered-in graphics processor chip. These units, like the Sonnet Breakaway Puck, are small enough to pack in your laptop bag, briefcase or backpack and can be seen as an opportunity to provide “improved graphics performance” when near AC power. There will be some limitations with these devices like a graphics processor that is modest by “desktop gaming rig” or “certified workstation” standards; or having reduced connectivity for extra peripherals. But they will put a bit of “pep” in to your laptop’s graphics performance at least.

Some of these small external graphics modules would have come about as a way to dodge the “crypto gold rush” where traditional desktop-grade graphics cards were very scarce and expensive. This was due to them being used as part of cryptocurrency mining rigs to facilitate the “mining” of Bitcoin or Ethereum during that “gold rush”. The idea behind these external graphics modules was to offer enhanced graphics performance for those of us who wanted to play games or engage in multimedia editing rather than mine Bitcoin.

Who is heading down this path?

At the moment, most computer manufacturers are configuring a significant number of Intel-powered ultraportable computers along these lines, i.e. with Intel integrated graphics and at least one Thunderbolt 3 port. Good examples of this are the recent iterations of the Dell XPS 13 and some of the Lenovo ThinkPad X1 family like the ThinkPad X1 Carbon.

Of course some of the computer manufacturers are also offering laptop configurations with modest-spec discrete graphics silicon along with the integrated-graphics silicon and a Thunderbolt 3 port. This is typically pitched towards premium 15” computers including some slimline systems, but these graphics processors may not offer much when it comes to graphics performance. In this case, they are most likely to be equivalent in performance to a current-spec baseline desktop graphics card.

The Thunderbolt 3 port on these systems would be about using something like a “card-cage” external graphics module with a high-performance desktop-grade graphics card to get more out of your games or advanced applications.

Trends affecting this configuration

The upcoming USB4 specification is meant to be able to bring Thunderbolt 3 capability to non-Intel silicon thanks to Intel assigning the intellectual property associated with Thunderbolt 3 to the USB Implementers Forum.

As well, Intel has put forward the next iteration of the Thunderbolt specification in the form of Thunderbolt 4. It is more of an evolutionary revision in relation to USB4 and Thunderbolt 3 and will be part of the next iteration of their Core silicon. But it is also intended to be backwards compatible with these prior standards and uses the USB-C connector.

What can be done to further legitimise Thunderbolt 3 / USB4 and integrated graphics as a valid laptop configuration?

What needs to happen is that the use case for external graphics modules needs to be demonstrated with USB4 and subsequent technology. As well, this kind of setup needs to appear on AMD-equipped computers as well as devices that use silicon based on ARM microarchitecture, along with Intel-based devices.

Personally, I would like to see Thunderbolt 3 or USB4 technology made available on more of the popularly-priced laptops offered to householders and small businesses. The ideal would be to allow the computer’s user to upgrade to better graphics at a later date by purchasing an external graphics module.

This is in addition to a wide range of external graphics modules available for these computers with some capable units being offered at affordable price points. I would also like to see more of the likes of the Lenovo Legion BoostStation “card-cage” external graphics module that have the ability for users to install storage devices like hard disks or solid-state drives in addition to the graphics card. Here, these would please those of us who want extra “offload” storage or a “scratch disk” just for use at their workspace. They would also help people who are moving from the traditional desktop computer to a workspace centred around a laptop.

Conclusion

The validity of a laptop computer being equipped with a Thunderbolt 3 or similar port and an integrated graphics chipset is to be recognised. This is more so where the viability of improving on one of these systems using an external graphics module that has a fit-for-purpose dedicated graphics chipset can be considered.


Do I see regular computing targeting both i86 and ARM microarchitectures?

Lenovo Yoga 5G convertible notebook press image courtesy of Lenovo

Lenovo Flex 5G / Yoga 5G convertible notebook which runs Windows on Qualcomm ARM silicon – the first laptop computer to have 5G mobile broadband on board

Increasingly, regular computers are moving towards the idea of having processor power based around either classic Intel (i86/i64) or ARM RISC microarchitectures. This is being driven by the idea of portable computers heading towards the latter microarchitecture as a power-efficiency measure with this concept driven by its success with smartphones and tablets.

This involves a different approach to designing silicon, especially RISC-based silicon, where different entities are involved in design and manufacturing. Previously, Motorola took the same approach as Intel and other silicon vendors, designing and manufacturing their desktop-computing CPUs and graphics infrastructure themselves. Now ARM take the approach of designing the microarchitecture themselves while other entities like Samsung and Qualcomm design and fabricate the exact silicon for their devices.

Apple MacBook Pro running MacOS X Mavericks - press picture courtesy of Apple

Apple to move the Macintosh platform to their own ARM RISC silicon

A key driver of this is Microsoft with their Always Connected PC initiative which uses Qualcomm ARM silicon similar to what is used in a smartphone or tablet. This is to have the computer able to work on basic productivity tasks for a whole day without needing to be on AC power. Then Apple intended to pull away from Intel and use their own ARM-based silicon for their Macintosh regular computers, a symptom of them going back to the platform’s RISC roots but not in a monolithic manner.

As well, the Linux community have established Linux-based operating systems on the ARM microarchitecture. This has led to Google running Android on ARM-based mobile and set-top devices and offering Chromebooks that use ARM silicon, along with Apple implementing it in their operating systems. Not to mention the many NAS devices and other home-network hardware that implement ARM silicon.

Initially the RISC-based computing approach was about more sophisticated use cases like multimedia or “workstation-class” computing compared to basic word-processing and allied computing tasks. Think of the early Apple Macintosh computers, the Commodore Amiga with its many “demos” and games, or the RISC/UNIX workstations like the Sun SPARCStation that existed in the late 80s and early 90s. Now it is about power and thermal efficiency for a wide range of computing tasks, especially where portable or low-profile devices are concerned.

Software development

Already mobile and set-top devices use ARM silicon

I will see an expectation for computer operating systems and application software to be written and compiled for both classic Intel i86 and ARM RISC microarchitectures.  This will require software development tools to support compiling and debugging on both platforms and, perhaps, microarchitecture-agnostic application-programming approaches.  It is also driven by the use of ARM RISC microarchitecture on mobile and set-top/connected-TV computing environments with a desire to allow software developers to have software that is useable across all computing environments.

WD MyCloud EX4100 NAS press image courtesy of Western Digital

.. as do a significant number of NAS units like this WD MyCloud EX4100 NAS

Some software developers, usually small-time or bespoke-solution developers, will end up using “managed” software-development environments like Microsoft’s .NET Framework or Java. These allow the programmer to turn out a machine-executable file that depends on pre-installed run-time elements in order to run. These run-time elements are installed in a manner specific to the host computer’s microarchitecture and make use of the host computer’s capabilities. Such environments may allow the software developer to “write once, run anywhere” without knowing whether the computer the software is to run on uses an i86 or ARM microarchitecture.
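A trivial illustration of that “write once, run anywhere” point: the same script in a managed or interpreted environment can simply ask the host runtime which microarchitecture it ended up on, rather than being compiled separately for each one (Python shown here as an example of such an environment).

```python
import platform

machine = platform.machine()   # e.g. "x86_64", "AMD64", "arm64", "aarch64"
if machine.lower() in ("arm64", "aarch64"):
    print("Running on ARM silicon:", machine)
else:
    print("Running on classic Intel/AMD silicon:", machine)
```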

There may also be an approach towards “one-machine two instruction-sets” software development environments to facilitate this kind of development where the goal is to simply turn out a fully-compiled executable file for both instruction sets.

It could be in an accepted form like run-time emulation or machine-code translation as what is used to allow MacOS or Windows to run extant software written for different microarchitectures. Or one may have to look at what went on with some early computer platforms like the Apple II where the use of a user-installable co-processor card with the required CPU would allow the computer to run software for another microarchitecture and platform.

Computer Hardware Vendors

For computer hardware vendors, there will be an expectation towards positioning ARM-based silicon towards high-performance power-efficient computing. This may be about highly-capable laptops that can do a wide range of computing tasks without running out of battery power too soon. Or “all-in-one” and low-profile desktop computers will gain increased legitimacy when it comes to high-performance computing while maintaining the svelte looks.

Personally, if ARM-based computing was to gain significant traction, it may have to be about Microsoft encouraging silicon vendors other than Qualcomm to offer ARM-based CPUs and graphics processors fit for “regular” computers. As well, Microsoft and the Linux community may have to look towards legitimising “performance-class” computing tasks like “core” gaming and workstation-class computing on that microarchitecture.

There may be the idea of using 64-bit i86 microarchitecture as a solution for focused high-performance work. This may be due to a large amount of high-performance software code written to run with the classic Intel and AMD silicon. It will most likely exist until a significant amount of high-performance software is written to run natively with ARM silicon.

Conclusion

Thanks to Apple and Microsoft heading towards ARM RISC microarchitecture, the computer hardware and software community will have to look at working with two different microarchitectures especially when it comes to regular computers.


The Dell XPS 13 is now seen as the benchmark for Windows Ultrabooks

Other reviews in the computer press

The Dell XPS 13 Kaby Lake edition – what has defined the model in terms of what it offers

Dell XPS 13 (2019) review: | CNet

Dell XPS 13 (2019) Review | Laptop Mag

Dell XPS 13 (2019) review: the right stuff, refined | The Verge

Review: Dell XPS 13 (2019) | Wired

Dell XPS 13 review (2020) | Tom’s Guide

Previous coverage on HomeNetworking01.info

A 13” traditional laptop found to tick the boxes

Dell’s XPS 13 convertible laptop underscores value for money for its class

This year’s computing improvements from Dell (2019)

Reviews of previous generations of the Dell XPS 13

Clamshell variants

First generation (Sandy Bridge)

2017 Kaby Lake

2018 8th Generation

2-in-1 convertible variants

2017 Kaby Lake

My Comments

Of late, the personal-IT press have identified a 13” ultraportable laptop computer that has set a benchmark when it comes to consumer-focused computers of that class. This computer is the Dell XPS 13 family of Ultrabooks which are a regular laptop computer family that runs Windows and is designed for portability.

What makes these computers special?

A key factor about the way Dell had worked on the XPS 13 family of Ultrabooks was to make sure the ultraportable laptops had the important functions necessary for this class of computer. They also factored in the durability aspect because if you are paying a pretty penny for a computer, you want to be sure it lasts.

As well, it was all part of assuring that the end-user got value for money when it came to purchasing an ultraportable laptop computer.

In a previous article that I wrote about the Dell XPS 13, I compared it to the National Panasonic mid-market VHS videocassette recorders offered since the mid-1980s to the PAL/SECAM (Europe, Australasia, Asia) market, and the Sony mid-market MiniDisc decks offered through the mid-to-late 1990s. Both these product ranges were designed with the focus on offering the features and performance that count for most users at a price that offers value for money and is “easy to stomach”.

Through the generations, Dell introduced the very narrow bezel for the screen, but this required the typical camera module to be mounted under the screen. That earnt some criticism in the computing press because the camera was “looking up at the user’s nose”. For the latest generation, Dell developed a very small camera module that sits at the top of the screen while maintaining the XPS 13’s very narrow bezel.

The Dell XPS 13 Kaby Lake 2-in-1 convertible Ultrabook variant

The Dell XPS 13 can be specified with three different Intel Core CPU grades (i3, i5 and i7) and with an optional 4K UHD display. The ultraportable laptop relies on Intel integrated graphics, but the past two generations of the Dell XPS 13 are equipped with two Thunderbolt 3 ports, so you can use it with an external graphics module if you want improved graphics performance.

There was some doubt about Dell introducing a 2-in-1 convertible variant of the XPS 13, due to that form factor being perceived as a gimmick rather than something of utility. But Dell introduced the convertible variant of this Ultrabook as part of the 2017 Kaby Lake generation. It placed Dell in the highly-competitive field of ultraportable convertible computers and allowed the company to put a focus on “value-focused” 2-in-1 ultraportables.

What will this mean for Dell and the personal computer industry?

Dell XPS 13 9380 Webcam detail press picture courtesy of Dell Corporation

Thin Webcam circuitry atop display rectifies the problem associated with videocalls made on the Dell XPS 13

The question that will come about is how far Dell can go towards improving this computer. At the moment, it could be about keeping each generation of the XPS 13 Ultrabook in step with the latest mobile-focused silicon and mobile-computing technologies. Dell could also end up offering a 14” clamshell variant of this computer for those of us wanting a larger screen in something that still comfortably fits on an economy-class airline tray table.

For the 2-in-1 variant, Dell could even bring the XPS 13 to a point where it is simply about value for money compared to other 13” travel-friendly convertible ultraportables. Here, they would underscore the features that every user of that class of computer needs, especially when it comes to “on-the-road” use, along with preserving a durable design.

Other computer manufacturers will also be looking at the Dell XPS 13 as the computer to match, if not beat, when it comes to offering value for money in their 13” travel-friendly clamshell ultraportable ranges. This can include companies heavily present in particular market niches, like enterprise computing, who will take what Dell is offering and tailor it to their particular niche.

Best value configuration suggestions

Most users could get by with a Dell XPS 13 that uses an Intel Core i5 CPU, 8GB of RAM and at least 256GB of solid-state storage. You may want to pay more for an i7 CPU and/or 16GB of RAM if you are chasing more performance, or spend more on a higher storage capacity if you will be storing more data while away.

If you expect to use your XPS 13 on the road, it would be wise to avoid the 4K UHD screen option because that resolution could make your Ultrabook thirstier when running on its own battery.

The 2-in-1 convertible variant is worth considering if you are after this value-priced ultraportable in a “Yoga-style” convertible form.

Conclusion

What I have found through my experience with the Dell XPS 13 computers, along with the computer-press write-ups about them, is that Dell has effectively defined the benchmark for an Intel-powered travel-friendly ultraportable laptop computer.

What do I mean by a native client for an online service?

Facebook Messenger Windows 10 native client

Facebook Messenger – now native on Windows 10

With the increasing number of online services including cloud and “as-a-service” computing arrangements, there has to be a way for users to gain access to these services.

Previously, the common way was to use a Web-based user interface, where the user has to run a Web browser to gain access to the online service. The user sees the Web browser’s interface elements, also known as “chrome”, as part of the user experience. This used to be very limiting when it came to functionality, but successive revisions of browsers and Web standards have allowed a kind of functionality similar to regular apps.

Dropbox native client view for Windows 8 Desktop

Dropbox native client view for Windows 8 Desktop – taking advantage of what Windows Explorer offers

A variant on this theme is a “Web app”, which provides a user interface without the Web browser’s interface elements. The Web browser still works as an interpreter between the online service and the user interface, although the user doesn’t see it as that. It is appealing as an approach to writing online-service clients because of the idea of “write once, run anywhere”.

Another common approach is to write an app that is native to a particular computing platform and operating system. These apps, which I describe as “native clients”, are fully optimised in performance and functionality for that computing platform. This is because there isn’t the overhead of a Web browser needing to interpret the code associated with a Web page. As well, the software developer can take advantage of what the computing platform and operating system offer even before the Web browser developers build support for that function into their products.

There are some good examples of online-service native clients having an advantage over Web apps or Web pages. One of these is messaging and communications software. A user may want to use an instant-messaging program to communicate with their friends or colleagues while also running graphics software or games that place heavy demands on the computer. Here, a native instant-messaging client can run alongside the high-demand program without the overhead associated with a Web browser.

The same situation can apply to online games where players can see a perceived improvement in their performance. As well, it is easier for the software developer to write them to take advantage of higher-performance processing silicon. It includes designing an online game for a portable computing platform that optimises itself for either external power or battery power.

This brings me to native-client apps that are designed for a particular computing platform from the outset. One key application is to provide a user interface that is optimised for “lean-back” operation, something that is demanded of anything to do with TV and video content. The goal is usually to support a large screen viewed at a distance, with the user navigating the interface using a remote control that has a D-pad and, perhaps, a numeric keypad. That remote control represents the primary kind of user interface that most smart TVs and set-top boxes offer.
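
To make the “lean-back” idea concrete, here is a toy sketch, not tied to any real smart-TV SDK, of how a D-pad-driven interface typically behaves: a grid of content tiles with a single highlighted position that the remote’s arrow buttons move around. The tile names and grid size are made-up placeholders.

```python
# Toy sketch of D-pad ("lean-back") navigation: a grid of content tiles
# where the highlighted tile moves with up/down/left/right presses from
# a remote control. Tile names and grid size are placeholders.
ROWS, COLS = 3, 5
tiles = [[f"Show {r * COLS + c + 1}" for c in range(COLS)] for r in range(ROWS)]
focus = [0, 0]   # currently highlighted tile: [row, column]

def press(button: str) -> str:
    """Move the highlight in response to a remote-control button press."""
    moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    if button in moves:
        dr, dc = moves[button]
        focus[0] = min(max(focus[0] + dr, 0), ROWS - 1)   # clamp to the grid
        focus[1] = min(max(focus[1] + dc, 0), COLS - 1)
    return tiles[focus[0]][focus[1]]

print(press("right"))   # Show 2
print(press("down"))    # Show 7
```

A native client written against the set-top platform’s own toolkit can wire this kind of focus handling directly to the remote-control hardware, which is harder to do smoothly from inside a Web browser.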

Another example is software that is written to work with online file-storage services. Here, a native client for these services can be written to expose your files at the online file-storage service as if they are another file collection, similar to your computer’s hard disk or a removable medium.
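
As a minimal sketch of that idea, a native client could mirror the online storage into a local folder so that every other program sees the files as ordinary local files. The service URL, endpoints and token below are hypothetical, not any particular vendor’s API.

```python
# Minimal sketch of a native file-storage client that mirrors a cloud
# service into a local folder, so other apps see the files as ordinary
# local files. The service URL, endpoints and token are hypothetical.
from pathlib import Path
import requests

SERVICE = "https://cloud-storage.example.com/api"   # hypothetical endpoint
TOKEN = "user-access-token"                          # obtained at sign-in
LOCAL_ROOT = Path.home() / "CloudDrive"              # folder exposed to other apps

def list_remote_files() -> list[dict]:
    """Ask the (hypothetical) service for the user's file listing."""
    resp = requests.get(f"{SERVICE}/files",
                        headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    return resp.json()   # e.g. [{"name": "report.docx", "id": "abc123"}, ...]

def sync_down() -> None:
    """Download each remote file into the local folder."""
    LOCAL_ROOT.mkdir(exist_ok=True)
    for item in list_remote_files():
        data = requests.get(f"{SERVICE}/files/{item['id']}/content",
                            headers={"Authorization": f"Bearer {TOKEN}"})
        data.raise_for_status()
        (LOCAL_ROOT / item["name"]).write_bytes(data.content)   # visible in Explorer / Finder

if __name__ == "__main__":
    sync_down()
```

A production client such as Dropbox goes much further with change notifications and two-way sync, but the principle is the same: the operating system’s own file manager does the presentation work rather than a Web page.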

Let’s not forget that native-client apps can be designed to make the best use of application-specific peripherals because they work directly with the operating system. This suits setups that demand such peripherals, including the so-called “app-cessory” setups for various devices.

When an online platform is being developed, the client software developers shouldn’t forget the idea of creating native client software that works tightly with the host computer platform.

Why I support multiple accounts with online media endpoints at home

Apple TV 4th Generation press picture courtesy of Apple

The Apple TV set-top box – an example of a popular online-media platform

It is so easy to think of one person being associated with an account-based online media service that runs on a commonly-used online media device. The classic example of this is a smart TV or set-top box installed in the main living room. It also extends to smart speakers, Internet radios and network-capable audio setups that work with various online audio content services.

The reality is that many adults will end up using the same device, like the aforementioned smart TV. But a lot of online-media services, like Netflix, the broadcast video-on-demand services run by the free-to-air TV broadcasters, or online audio services, implement user-account-driven operation so customers benefit from their subscription or from user-experience personalisation like “favourite shows” lists. With these smart TVs or similar devices, you can only associate the device with one user account for each of these services, which assumes that one person owns and operates the device.

Dish Joey 4K set-top box press picture courtesy of Dish Networks America

Set-top boxes connected to TVs in common areas are used by many people

Admittedly, Apple has started work on having one Apple TV device work with multiple Apple ID user accounts, leading towards concurrent operation of these accounts in tvOS 13. But, at the moment, this only works with Apple-provided online services that are bound to end-users’ Apple IDs.

This reality is driven by the rise in multi-generational households with adult children living under the same roof as their parents. That has come about due to strong financial pressures on young people driven by costly housing in major cities, whether owned or rented. It goes along with that long-time adult reality of maintaining personal relationships under the same roof, while other adults end up staying at the home of another person they are friendly with as a temporary measure. As well, younger adults are increasingly living in share-houses in order to split their living costs easily amongst each other.

Dell Inspiron 14 5000 2-in-1 - viewer arrangement at Rydges Melbourne (Locanda)

An online media account set up on a laptop, tablet or smartphone is typically established for one user who has exclusive use of that device

But a significant number of the accounts for the various online-media services are established on computing devices that are primarily or exclusively used by a single adult. Then a person may decide to register their online-media service account on a commonly-used online-media device to use their subscription or customisations there.

The problem that easily happens is that other people cannot operate their accounts for the same service on that same device, thus losing the benefit of their customisations on that device. Or, if they do, they have to complete the rigmarole of logging others out before they log in, with some services having a login procedure that requires usernames and passwords to be entered on the media device using that dreaded “pick-and-choose” method, even if the account was set up using social sign-in.

What does the single-account problem affect?

Netflix menu screen - favourites

Shows you have marked as “favourite” for your profile in your Netflix account

The situation can also affect the account associated with the commonly-used device in a number of ways. This is more so with the content-recommendation engines that most online media services implement, which help in the discovery of new content that may be of interest. The behaviour of these engines manifests as a “recommended content” playlist that appears on the service’s homepage, as the email sent out to each of the service’s customers with a list of recommended content, or as a content suggestion that appears at the end of the content you were engaging with.

SBS On Demand - favourites screenshot

Another example of shows you have marked as favourite – this time on SBS On Demand

Here, you may have “steered” SBS On Demand’s content recommendation engine to bring up European thrillers because you watch these shows. But someone else comes in with a penchant for, perhaps, Indian Bollywood content. They binge on episodes of this content and you end up with the recommended-content list diluted with Indian content.

SBS On Demand - recommendations screenshot

A recommended-content playlist like this one can be diluted when one account is shared by many people with different tastes, as with SBS On Demand

Another area this affects is the list of favourite shows or currently-viewing series that these services keep. You use these lists to identify where you are up to in a show or series you are viewing. Similarly, your member email may alert you to new seasons of your favourite series or to a show that is about to be removed from the service. But if you started working through a show or series on a computing device you exclusively use and want to continue it on the large-screen TV bound to someone else’s account, you won’t be able to do so unless you log in with your own account there.

In the same context, it doesn’t allow a user who is enjoying content on the account associated with the commonly-used device to continue it on another device associated with their own account. This may be of concern if, for example, you commenced viewing an episode of a binge-worthy series on the main TV in the house’s main living area but had to continue it on your 2-in-1 laptop in your bedroom because someone else wants to use that TV for something else.

Common workarounds

Using a setup like AirPlay, Chromecast or hard-wired connectivity to link your own computing device to the large-screen TV may be seen as a workaround for access to your account even if the set or main set-top device is associated with another account.

But this can yield problems, like mobile devices not delivering the best-quality picture over a hard-wired connection, or the workaround depending on an Apple TV, Chromecast, Android TV setup or appropriate cable being connected to the TV you want to use. Let alone it not being feasible to carry that desktop computer of yours over to the main TV to watch that Netflix show there using your account and its customisations. Or your smartphone or tablet going to sleep and interrupting your viewing because it is taking battery-conservation measures or has simply run out of battery power.

You may find that connecting multiple set-top boxes or similar devices to the main TV, each bound to a different person’s accounts, is another workaround. This is typically demonstrated by a games console bound to its owner’s online media service accounts being connected to a smart TV that is bound to someone else’s online-media-service accounts.

But this can look very ugly, become less usable, and you may not have enough HDMI ports on your TV or audio peripherals (soundbar, AV receiver) to cater for a set-top device bound to each individual household member’s accounts. It is made worse by most TVs having up to 3 HDMI inputs and most popularly-priced audio peripherals only having the one HDMI-ARC connection to the TV.

What can be done?

An online media service that works through a particular online media endpoint device could support multiple logins, with the number of accounts capped somewhere under ten.

Here, you could have an option to add or delete extra accounts through the online media-service interface, just as if you were managing your own account on that interface. The authentication process for adding accounts would be the same as for your own account, whether through supplying a username and password or through transcribing an on-screen code into the Website or mobile app for that service to enrol a limited-interface device.
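
As a rough illustration of the second method, which is similar in spirit to the OAuth 2.0 device-authorisation flow, a set-top device could fetch a short code to show on screen, then poll the service until the new household member has entered that code on the service’s website. The endpoints below are hypothetical placeholders, not any real service’s API.

```python
# Rough sketch of "enter this code on the website" enrolment for a
# limited-interface device, similar in spirit to the OAuth 2.0 device
# authorisation flow. The service endpoints are hypothetical.
import time
import requests

SERVICE = "https://media-service.example.com/api"   # hypothetical endpoint

def enrol_additional_account() -> str:
    """Return an access token for the newly added household member."""
    # 1. Ask the service for a short user code to display on the TV screen.
    start = requests.post(f"{SERVICE}/device/code").json()
    print(f"On your phone or computer, visit {start['verification_url']}")
    print(f"and enter the code: {start['user_code']}")

    # 2. Poll until that person has signed in and approved the device.
    #    (A real client would also handle code expiry and denial.)
    while True:
        time.sleep(start.get("interval", 5))
        poll = requests.post(f"{SERVICE}/device/token",
                             data={"device_code": start["device_code"]})
        if poll.status_code == 200:
            return poll.json()["access_token"]
```

Each token obtained this way would be stored alongside the token for the account already on the device, so the endpoint can offer either profile from its home screen.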

A question that will come up is whether to have the accounts operating concurrently, with the device exposing the customisations associated with each account on the same interface, or to require the end-users to switch accounts for exclusive operation when they want to use their own account.

Concurrent operation may be of relevance to, for example, a couple who watch their shows with each other, whereas exclusive operation may come into its own with an adult who watches their shows by themselves. This distinction can also help with building out content recommendations, or with the online-media service keeping track of how popular a particular piece of content is and how it is enjoyed.

What features can this add to online media consumption?

One feature would be the ability to easily enjoy the same content across different devices associated with your account, no matter whether they are exclusive to your account or not. This would be of benefit where you are working through the same content in different locations, like hearing a playlist from that online music service in the car and then at home on the hi-fi, or watching a series on an iPad while coming home from work on the train then continuing it on the TV in the main lounge area at home.

Concurrent operation could also allow for an amalgamated content-choice experience, perhaps with separate menus or playlists for each person. It can extend to providing a list of common favourites or content recommendations that appeal “across the board”.

You would also make sure that the content recommendations offered by the online media service reflect your content-consumption habits rather than being diluted by someone else’s choices. This matters most where you enjoy particular music or video content and want to discover similar content.

In some cases, the content-recommendation engine could come up with content that appeals to the tastes represented by a group of accounts, like a household, rather than just one account. Such recommendations could be listed alongside the account-specific recommendation lists.
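
A very simple way to picture such household-wide recommendations is to average each account’s affinity for a candidate title and rank the results. The accounts, titles and scores below are made-up placeholders for whatever viewing signals a service actually collects.

```python
# Toy sketch of blending per-account tastes into household-wide picks.
# Accounts, titles and affinity scores are made-up placeholders.
from statistics import mean

household = {
    "parent":      {"European thriller": 0.9, "Bollywood musical": 0.1, "Cooking show": 0.6},
    "adult_child": {"European thriller": 0.2, "Bollywood musical": 0.9, "Cooking show": 0.7},
}

def household_picks(accounts: dict[str, dict[str, float]], top_n: int = 2) -> list[str]:
    """Rank titles by their average appeal across every account in the group."""
    titles = {title for tastes in accounts.values() for title in tastes}
    scored = {t: mean(tastes.get(t, 0.0) for tastes in accounts.values()) for t in titles}
    return sorted(scored, key=scored.get, reverse=True)[:top_n]

print(household_picks(household))   # ['Cooking show', 'European thriller']
```

The point of the averaging is that a title one person loves but another dislikes sinks below titles that everyone finds at least moderately appealing, which is the kind of “across the board” list described above.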

Conclusion

What needs to be considered as online multimedia consumption rises is the ability for multiple accounts for the same online media service to be used on the same device. This would mean that these services work well with the reality of multiple-adult households, such as couples or multi-generational households.

It then means that the service is personalised to each end-user’s tastes and the content recommendation system in these services reflects what they watch.
