Commentary Archive

Why do I see Thunderbolt 3 and integrated graphics as a valid option set for laptops?

Dell XPS 13 8th Generation Ultrabook at QT Melbourne rooftop bar

The Dell XPS 13 series of ultraportable computers uses a combination of Intel integrated graphics and Thunderbolt 3 USB-C ports

Increasingly, laptop users want to make sure their computers earn their keep for computing performed away from home or the office. But they also want these machines to handle activities that demand more of them, like playing advanced games or editing photos and videos.

What is this about?

Integrated graphics processors like Intel's UHD and Iris Plus allow your laptop to run for a long time on its own battery. This is because they use the system RAM to “paint” the images you see on the screen and are optimised for low-power mobile use. This is even more so if the computer's screen resolution is no more than the equivalent of Full HD (1080p), which also puts little strain on the battery.

Integrated graphics may be seen as suitable for day-to-day computing tasks like Web browsing, email or word processing, or for lightweight multimedia and gaming while on the road. Some games developers are even producing capable, playable video games optimised to run on integrated graphics, so you can play them on modest computer equipment or to while away a long journey.

Some “everyday-use” laptop computers are equipped with a discrete graphics processor alongside the integrated graphics, with the host computer implementing automatic GPU-switching for energy efficiency. Typically this discrete processor is a modest mobile-grade unit that doesn't offer much for performance-grade computing, but it may provide some “pep” for some games and multimedia tasks.

Thunderbolt 3 connection on a Dell XPS 13 2-in-1

But if your laptop has at least one Thunderbolt 3 USB-C port alongside the integrated graphics, another option opens up. Here, you could use an external graphics module, also known as an eGPU, to add high-performance dedicated graphics to your computer while you are at home or the office. These devices also provide charging power for your laptop, which in most cases relegates the laptop's supplied AC adaptor to an “on-the-road” or secondary charging option.

A use case often cited for this kind of setup is a university student who wants to use the laptop in the campus library for study or to take notes during classes. They then head home, whether that is student accommodation like a dorm or residence hall on campus, an apartment or house shared with other students, or their parents' home within a short, affordable commute of the campus. The use case typifies the idea of the computer supporting gaming as a rest-and-recreation activity at home once everything they need to do is done.

Razer Blade gaming Ultrabook connected to Razer Core external graphics module - press picture courtesy of Razer

Razer Core external graphics module with Razer Blade gaming laptop

Here, the idea is to use the external graphics module with the computer and a large-screen monitor so the extra graphics power comes into play during a video game. As well, if the external graphics module is portable enough, it may be about connecting the laptop to a large-screen TV installed in a common lounge area at their accommodation on an ad-hoc basis, so they benefit from that large screen when playing a game or watching multimedia content.

The advantage in this use case is that the computer stays affordable for a student at their current point in life, because it isn't kitted out with a dedicated graphics processor that may be seen as hopeless anyway. But the student can save towards an external graphics module of their choice and buy it later when they see fit. In some cases, it may be about using a “fit-for-purpose” graphics card like an NVIDIA Quadro with the eGPU if they maintain interest in that architecture or multimedia course.

It also extends to business users and multimedia producers who prefer a highly-portable laptop “on the road” but use an external graphics module “at base” for activities that need extra graphics power, such as rendering video projects or playing a more-demanding game as part of rest and relaxation.

Sonnet eGFX Breakaway Puck integrated-chipset external graphics module press picture courtesy of Sonnet Systems

Sonnet eGFX Breakaway Puck integrated-chipset external graphics module – the way to go for ultraportables

There are a few small external graphics modules that come with a soldered-in graphics processor chip. These units, like the Sonnet Breakaway Puck, are small enough to pack in your laptop bag, briefcase or backpack and can be seen as an opportunity for “improved graphics performance” when near AC power. There will be some limitations with these devices, like a graphics processor that is modest by “desktop gaming rig” or “certified workstation” standards, or reduced connectivity for extra peripherals. But they will put a bit of “pep” into your laptop's graphics performance at least.

Some of these small external graphics modules came about as a way to dodge the “crypto gold rush”, when traditional desktop-grade graphics cards were very scarce and expensive due to their use in rigs “mining” cryptocurrencies like Ethereum. The idea behind these external graphics modules was to offer enhanced graphics performance for those of us who wanted to play games or edit multimedia rather than mine cryptocurrency.

Who is heading down this path?

At the moment, most computer manufacturers are configuring a significant number of Intel-powered ultraportable computers along these lines, i.e. with Intel integrated graphics and at least one Thunderbolt 3 port. Good examples of this are the recent iterations of the Dell XPS 13 and some of the Lenovo ThinkPad X1 family like the ThinkPad X1 Carbon.

Of course, some computer manufacturers also offer laptop configurations with modest-spec discrete graphics silicon alongside the integrated graphics and a Thunderbolt 3 port. These are typically pitched as premium 15” computers, including some slimline systems, but the discrete graphics processors don't put up much graphics performance, most likely being equivalent to a current-spec baseline desktop graphics card.

The Thunderbolt 3 port on these systems would be about using something like a “card-cage” external graphics module with a high-performance desktop-grade graphics card to get more out of your games or advanced applications.

Trends affecting this configuration

The upcoming USB4 specification is meant to bring Thunderbolt 3 capability to non-Intel silicon, thanks to Intel contributing the intellectual property associated with Thunderbolt 3 to the USB Implementers Forum.

As well, Intel has put forward the next iteration of the Thunderbolt specification in the form of Thunderbolt 4. It is more of an evolutionary revision of USB4 and Thunderbolt 3 and will be part of the next iteration of Intel's Core silicon. It is also intended to be backwards compatible with these prior standards and uses the USB-C connector.

What can be done to further legitimise Thunderbolt 3 / USB4 and integrated graphics as a valid laptop configuration?

What needs to happen is for the use case for external graphics modules to be demonstrated with USB4 and subsequent technology. This kind of setup also needs to appear on AMD-equipped computers and on devices using ARM-based silicon, alongside Intel-based devices.

Personally, I would like to see Thunderbolt 3 or USB4 technology made available on more of the popularly-priced laptops offered to householders and small businesses. The ideal would be to allow the computer's user to upgrade to better graphics at a later date by purchasing an external graphics module.

This is in addition to a wide range of external graphics modules being available for these computers, with some capable units offered at affordable price points. I would also like to see more units like the Lenovo Legion BoostStation “card-cage” external graphics module, which lets users install storage devices like hard disks or solid-state drives alongside the graphics card. These would please those of us who want extra “offload” storage or a “scratch disk” just for use at our workspace, and would help people moving from a traditional desktop computer to a workspace centred around a laptop.

Conclusion

The validity of a laptop computer equipped with a Thunderbolt 3 or similar port and an integrated graphics chipset deserves to be recognised. This is more so given the viability of improving one of these systems with an external graphics module that has a fit-for-purpose dedicated graphics chipset.


Do I see regular computing targeting both x86 and ARM microarchitectures?

Lenovo Yoga 5G convertible notebook press image courtesy of Lenovo

Lenovo Flex 5G / Yoga 5G convertible notebook which runs Windows on Qualcomm ARM silicon – the first laptop computer to have 5G mobile broadband on board

Increasingly, regular computers are moving towards processor power based on either the classic Intel (x86/x64) or the ARM RISC microarchitecture. This is being driven by portable computers heading towards the latter as a power-efficiency measure, a concept propelled by its success with smartphones and tablets.

ARM represents a different approach to designing silicon, especially RISC-based silicon, where different entities are involved in design and manufacturing. Previously, Motorola took the same approach as Intel and other silicon vendors, designing and manufacturing its desktop-computing CPUs and graphics infrastructure itself. Now ARM designs the microarchitecture while other entities like Samsung and Qualcomm license it to design the exact silicon for their devices.

Apple MacBook Pro running MacOS X Mavericks - press picture courtesy of Apple

Apple to move the Macintosh platform to their own ARM RISC silicon

A key driver of this is Microsoft with their Always Connected PC initiative, which uses Qualcomm ARM silicon similar to what is used in a smartphone or tablet. The goal is a computer that can work on basic productivity tasks for a whole day without needing AC power. Then Apple moved to pull away from Intel and use their own ARM-based silicon for their Macintosh regular computers, a sign of the platform going back to its RISC roots but not in a monolithic manner.

As well, the Linux community has established Linux-based operating systems on the ARM microarchitecture. This has led to Google running Android on ARM-based mobile and set-top devices and offering Chromebooks that use ARM silicon, along with Apple implementing it in their operating systems. Not to mention the many NAS devices and other home-network hardware that implement ARM silicon.

Initially, alternative microarchitectures were about more sophisticated use cases like multimedia or “workstation-class” computing compared to basic word processing and allied tasks. Think of the Motorola-based early Apple Macintosh computers, the Commodore Amiga with its many “demos” and games, or RISC/UNIX workstations like the Sun SPARCstation of the late 80s and early 90s. Now it is about power and thermal efficiency for a wide range of computing tasks, especially where portable or low-profile devices are concerned.

Software development

Already mobile and set-top devices use ARM silicon

I see an expectation for computer operating systems and application software to be written and compiled for both the classic Intel x86 and the ARM RISC microarchitecture. This will require software development tools to support compiling and debugging on both platforms and, perhaps, microarchitecture-agnostic application-programming approaches. It is also driven by the use of the ARM RISC microarchitecture in mobile and set-top/connected-TV computing environments, with a desire for software developers to have software that is useable across all computing environments.

WD MyCloud EX4100 NAS press image courtesy of Western Digital

.. as do a significant number of NAS units like this WD MyCloud EX4100 NAS

Some software developers, usually small-time or bespoke-solution developers, will use “managed” software development environments like Microsoft's .NET or Java. These allow the programmer to turn out an executable file that depends on pre-installed run-time elements to run. These run-time elements are installed in a manner specific to the host computer's microarchitecture and take advantage of the host computer's capabilities. Such environments may allow the software developer to “write once, run anywhere” without knowing whether the computer the software runs on uses an x86 or ARM microarchitecture.
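To make the “write once, run anywhere” idea concrete, here is a minimal sketch in Python (itself a managed runtime): the same script runs unchanged on either microarchitecture, yet can still ask which one it landed on. The helper name and the classification are my own for illustration, not any framework's API.

```python
import platform

def arch_family(machine: str = "") -> str:
    """Classify a raw machine string, as reported by platform.machine(),
    into a broad microarchitecture family."""
    machine = (machine or platform.machine()).lower()
    if machine in ("x86_64", "amd64", "i386", "i686"):
        return "Intel x86"
    if machine.startswith(("arm", "aarch")):
        return "ARM"
    return "other"

# The same script runs unmodified on either family and can still
# discover which one it is running on:
print(f"Running on {arch_family()} silicon")
```

The point is that the developer ships one artifact; the runtime installed on each machine handles the microarchitecture-specific work.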

There may also be an approach towards “one-machine, two instruction-sets” software development environments, where the goal is simply to turn out a fully-compiled executable file for both instruction sets.

It could take an accepted form like run-time emulation or machine-code translation, as used to allow macOS or Windows to run existing software written for a different microarchitecture. Or one may look at what went on with some early computer platforms like the Apple II, where a user-installable co-processor card with the required CPU would allow the computer to run software for another microarchitecture and platform.

Computer Hardware Vendors

For computer hardware vendors, there will be an expectation to position ARM-based silicon for high-performance, power-efficient computing. This may be about highly-capable laptops that can perform a wide range of computing tasks without running out of battery power too soon. Or “all-in-one” and low-profile desktop computers gaining increased legitimacy for high-performance computing while maintaining their svelte looks.

Personally, if ARM-based computing were to gain significant traction, it may have to be about Microsoft encouraging silicon vendors other than Qualcomm to offer ARM-based CPUs and graphics processors fit for “regular” computers. As well, Microsoft and the Linux community may have to look towards legitimising “performance-class” computing tasks like “core” gaming and workstation-class computing on that microarchitecture.

There may be the idea of using the 64-bit x86 microarchitecture as a solution for focused high-performance work, due to the large amount of high-performance software code written to run on classic Intel and AMD silicon. This will most likely persist until a significant amount of high-performance software is written to run natively on ARM silicon.

Conclusion

Thanks to Apple and Microsoft heading towards the ARM RISC microarchitecture, the computer hardware and software community will have to look at working with two different microarchitectures, especially when it comes to regular computers.


The Dell XPS 13 is now seen as the benchmark for Windows Ultrabooks

Other reviews in the computer press

The Dell XPS 13 Kaby Lake edition – what has defined the model as far as what it offers

Dell XPS 13 (2019) review | CNET

Dell XPS 13 (2019) Review | Laptop Mag

Dell XPS 13 (2019) review: the right stuff, refined | The Verge

Review: Dell XPS 13 (2019) | Wired

Dell XPS 13 review (2020) | Tom’s Guide

Previous coverage on HomeNetworking01.info

A 13” traditional laptop found to tick the boxes

Dell’s XPS 13 convertible laptop underscores value for money for its class

This year’s computing improvements from Dell (2019)

Reviews of previous generations of the Dell XPS 13

Clamshell variants

First generation (Sandy Bridge)

2017 Kaby Lake

2018 8th Generation

2-in-1 convertible variants

2017 Kaby Lake

My Comments

Of late, the personal-IT press have identified a 13” ultraportable laptop computer that has set a benchmark for consumer-focused computers of that class. This computer is the Dell XPS 13 family of Ultrabooks, a regular laptop family that runs Windows and is designed for portability.

What makes these computers special?

A key factor in the way Dell worked on the XPS 13 family of Ultrabooks was to make sure these ultraportable laptops had the important functions necessary for this class of computer. They also factored in durability, because if you are paying a pretty penny for a computer, you want to be sure it lasts.

As well, it was all part of assuring that the end-user got value for money when it came to purchasing an ultraportable laptop computer.

In a previous article that I wrote about the Dell XPS 13, I compared it to the National Panasonic mid-market VHS videocassette recorders offered from the mid 1980s to the PAL/SECAM (Europe, Australasia, Asia) market, and the Sony mid-market MiniDisc decks offered through the mid-to-late 1990s. Both product ranges were designed with a focus on offering the features and performance that count for most users, at a price that offers value for money and is “easy to stomach”.

Through the generations, Dell introduced the very narrow bezel for the screen, but this required the camera module to be mounted under the screen. That earned some criticism in the computing press due to it “looking up at the user's nose”. For the latest generation, Dell developed a very small camera module that can sit at the top of the screen while maintaining the XPS 13's very narrow bezel.

The Dell XPS 13 Kaby Lake 2-in-1 convertible Ultrabook variant

The Dell XPS 13 can be specified with any of three Intel Core CPU grades (i3, i5 and i7) and with an optional 4K UHD display. The ultraportable uses Intel integrated graphics, but the past two generations are equipped with two Thunderbolt 3 ports so you can use an external graphics module if you want improved graphics performance.

There was some doubt about Dell introducing a 2-in-1 convertible variant of the XPS 13 due to it being perceived as a gimmick rather than something of utility. But they introduced the convertible variant as part of the 2017 Kaby Lake generation. It placed Dell in a highly-competitive field of ultraportable convertible computers and let the company focus on “value-focused” 2-in-1 ultraportables.

What will this mean for Dell and the personal computer industry?

Dell XPS 13 9380 Webcam detail press picture courtesy of Dell Corporation

Thin Webcam circuitry atop display rectifies the problem associated with videocalls made on the Dell XPS 13

The question that will come about is how far Dell can go towards improving this computer. At the moment, it could be about keeping each generation of the XPS 13 Ultrabook in step with the latest mobile-focused silicon and mobile-computing technologies. They could also end up offering a 14” clamshell variant for those of us wanting a larger screen in something that still fits comfortably on an economy-class airline tray table.

For the 2-in-1 variant, Dell could even bring the XPS 13 to a point where it is simply about value for money compared to other 13” travel-friendly convertible ultraportables. Here, they would underscore the features every user of that class of computer needs, especially for “on-the-road” use, along with preserving a durable design.

Other computer manufacturers will also be looking at the Dell XPS 13 as the computer to match, if not beat, when it comes to offering value for money in their 13” travel-friendly clamshell ultraportable ranges. This includes companies heavily present in particular market niches like enterprise computing, who will take what Dell is offering and tailor it to their niche.

Best value configuration suggestions

Most users could get by with a Dell XPS 13 that uses an Intel Core i5 CPU, 8GB RAM and at least 256GB of solid-state storage. You may want to pay more for an i7 CPU and/or 16GB RAM if you are chasing more performance, or for a higher storage capacity if you will store more data while away.

If you expect to use your XPS 13 on the road, it would be wise to avoid the 4K UHD screen option because that resolution could make your Ultrabook thirstier when running on its own battery.

The 2-in-1 convertible variant is worth considering if you are after this value-priced ultraportable in a “Yoga-style” convertible form.

Conclusion

What I have found through my experience with the Dell XPS 13 computers along with the computer-press write-ups about them is that Dell has effectively defined a benchmark when it comes to an Intel-powered travel-friendly ultraportable laptop computer.


What do I mean by a native client for an online service?

Facebook Messenger Windows 10 native client

Facebook Messenger – now native on Windows 10

With the increasing number of online services including cloud and “as-a-service” computing arrangements, there has to be a way for users to gain access to these services.

Previously, the common way was a Web-based user interface, where the user runs a Web browser to gain access to the online service. The user sees the Web browser's interface elements, also known as “chrome”, as part of the user experience. This used to be very limiting when it came to functionality, but various revisions have allowed functionality similar to regular apps.

Dropbox native client view for Windows 8 Desktop

Dropbox native client view for Windows 8 Desktop- taking advantage of what Windows Explorer offers

A variant on this theme is the “Web app”, which provides a user interface without the Web browser's interface elements. The Web browser still works as an interpreter between the online service and the user interface, although the user doesn't see it as that. It is an appealing approach to writing online-service clients due to the idea of “write once, run anywhere”.

Another common approach is to write an app that is native to a particular computing platform and operating system. These apps, which I describe as “native clients”, are fully optimised in performance and functionality for that platform, because there is no overhead from a Web browser interpreting Web-page code. As well, the software developer can take advantage of what the platform and operating system offer even before Web-browser developers build support for a function into their products.

There are some good examples of online-service native clients having an advantage over Web apps or Web pages. One is messaging and communications software. A user may want to run an instant-messaging program to communicate with friends or colleagues while using graphics software or games that place demands on the computer. A native instant-messaging client can run alongside the high-demand program without the overhead associated with a Web browser.

The same situation applies to online games, where players can see improved performance. It is also easier for the software developer to write them to take advantage of higher-performance silicon, including designing an online game for a portable platform that optimises itself for either external power or battery power.

This brings me to native-client apps designed for a particular computing platform from the outset. One key application is a user interface optimised for “lean-back” operation, something demanded of anything about TV and video content. The goal is often to support a large screen viewed at a distance, with the user navigating the interface using a remote control that has a D-pad and, perhaps, a numeric keypad. That remote control is the primary user interface most smart TVs and set-top boxes offer.
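As a rough sketch of what D-pad navigation boils down to, the snippet below moves a highlight around a content grid and clamps it at the edges. The function and key names are illustrative only, not any platform's real API.

```python
def move_focus(row: int, col: int, key: str, rows: int, cols: int):
    """Return the new (row, col) of the highlighted tile after a
    D-pad press, clamped so the focus never leaves the grid."""
    if key == "UP":
        row = max(row - 1, 0)
    elif key == "DOWN":
        row = min(row + 1, rows - 1)
    elif key == "LEFT":
        col = max(col - 1, 0)
    elif key == "RIGHT":
        col = min(col + 1, cols - 1)
    return row, col

# Pressing RIGHT at the edge of a 3x4 grid keeps the focus in place:
print(move_focus(2, 3, "RIGHT", rows=3, cols=4))
```

A real ten-foot interface layers focus highlighting and content loading on top, but this clamped movement is the core interaction loop.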

Another example is software written to work with online file-storage services. Here, a native client can expose your files at the online file-storage service as if they were another file collection, similar to your computer's hard disk or removable media.
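As a hedged illustration of that idea, this toy Python class stands in for such a native client: it keeps the “remote” file contents in memory but hands them out through the same file-like interface a local file would offer. The class and method names are invented for this sketch, not any storage service's real API.

```python
import io

class CloudDrive:
    """Toy stand-in for an online file-storage service: file contents
    live in a dict keyed by path. A real native client would call the
    service's API, but would present the same file-like interface."""

    def __init__(self):
        self._files = {}

    def upload(self, path: str, data: bytes) -> None:
        self._files[path] = data

    def open(self, path: str) -> io.BytesIO:
        # Hand back a file-like object so ordinary code that expects
        # a local file can read the remote content unchanged.
        return io.BytesIO(self._files[path])

    def listdir(self, prefix: str = "") -> list:
        """List stored paths under a folder-like prefix."""
        return sorted(p for p in self._files if p.startswith(prefix))
```

The payoff is that any program written against ordinary file objects can read these “remote” files without knowing they live online.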

Let’s not forget that native-client apps can be designed to make best use of application-specific peripherals due to them directly working with the operating system. This can work well with setups that demand the use of application-specific peripherals, including the so-called “app-cessory” setups for various devices.

When an online platform is being developed, the client software developers shouldn’t forget the idea of creating native client software that works tightly with the host computer platform.


Why do I support multiple accounts on online media endpoints at home?

Apple TV 4th Generation press picture courtesy of Apple

The Apple TV set-top box – an example of a popular online-media platform

It is so easy to think of one person being associated with an account-based online media service run on a commonly-used online media device. The classic example is a smart TV or set-top box installed in the main living room. It also extends to smart speakers, Internet radios and network-capable audio setups that work with various online audio content services.

The reality is that many adults will end up using the same device, like the aforementioned smart TV. But a lot of online-media services, like Netflix, the broadcast video-on-demand services run by free-to-air TV broadcasters, or online audio services, implement user-account-driven operation so customers benefit from their subscription or from personalisation like “favourite shows” lists. With these smart TVs and similar devices, you can only associate the device with one user account for each service, which assumes that one person owns and operates the device.

Dish Joey 4K set-top box press picture courtesy of Dish Networks America

Set-top boxes connected to TVs in common areas are used by many people

Apple, though, has started work on having one Apple TV device work with multiple Apple ID user accounts, leading towards concurrent operation of these accounts in tvOS 13. But at the moment this only works with Apple-provided online services that are bound to end-users' Apple IDs.

This reality is driven by the rise of multi-generational households with adult children living under the same roof as their parents, which has come about due to strong financial pressures on young people from costly housing in major cities, whether owned or rented. It goes along with the long-time reality of adults maintaining personal relationships under the same roof, while other adults stay at the home of a friend as a temporary measure. As well, younger adults are increasingly living in share-houses to split their living costs easily amongst each other.

Dell Inspiron 14 5000 2-in-1 - viewer arrangement at Rydges Melbourne (Locanda)

An online media account set up on a laptop, tablet or smartphone is typically set up for one user having exclusive use of that device

But a significant number of the accounts for the various online-media services are established on computing devices that are primarily or exclusively used by a single adult. Then a person may decide to register their online-media service account on a commonly-used online-media device to use their subscription or customisations there.

The problem that easily happens is that other people cannot operate their accounts for the same service on that device, losing the benefit of their customisations there. Or if they do, they have to complete the rigmarole of logging others out before they log in, with some services requiring usernames and passwords to be entered on the media device using that dreaded “pick-and-choose” on-screen keyboard, even if the account was set up using social sign-in.

What does the single account problem affect?

Netflix menu screen - favourites

Shows you have marked as “favourite” for your profile in your Netflix account

The single-account situation affects the account associated with the commonly-used device in a number of ways. This is more so with the content-recommendation engines that most online media services implement, which help in the discovery of new content that may be of interest. The behaviour of these engines manifests as a “recommended content” playlist on the service's homepage, in the customer email sent out with a list of recommended content, or as a content suggestion at the end of the content you were watching.

SBS On Demand - favourites screenshot

Another example of shows you have marked as favourite – this time on SBS On Demand

Here, you may have “steered” SBS On Demand’s content recommendation engine to bring up European thrillers due to you watching these shows. But someone else comes in with a penchant for, perhaps, Indian Bollywood content. They binge on episodes of this content and you end up with the recommended-content list diluted with Indian content.

SBS On Demand - recommendations screenshot

The recommended-content playlist like this one can be diluted when there is one account shared by many with different tastes like with SBS On Demand

Another area this affects is the list of favourite shows or currently-viewing series that these services keep. You use these lists to identify where you are up to in a show or series you are viewing. Similarly, your member email may alert you to new seasons of your favourite series or warn that a show is to be removed from the service. But if you started working through a series on a computing device you exclusively use and want to continue it on the large-screen TV bound to someone else's account, you won't be able to do so unless you log in with your own account there.

In the same context, it doesn't let a user who began enjoying content on the account associated with the commonly-used device continue on another device associated with their own account. This may be of concern if, for example, you commenced viewing an episode of a binge-worthy series on the main TV in the house's main living area but had to continue it on your 2-in-1 laptop in your bedroom because someone else wanted the TV.

Common workarounds

Using a setup like AirPlay, Chromecast or hard-wired connectivity to link your own computing device to the large-screen TV may be seen as a workaround for access to your account even if the set or main set-top device is associated with another account.

But this can yield problems: mobile devices may not deliver the best-quality picture over a hard-wired connection, and the workaround depends on an Apple TV, Chromecast, Android TV device or appropriate cable being connected to the TV you want to use. It also isn't feasible to carry your desktop computer to the main TV to watch that Netflix show there with your account and its customisations. And your smartphone or tablet may interrupt your viewing by going to sleep to conserve its battery, or simply by running out of battery power.

Connecting multiple set-top boxes or similar devices to the main TV, with each one bound to a different account, may serve as another workaround. This is typically demonstrated by a games console bound to its owner’s online media service accounts being connected to a Smart TV that is bound to someone else’s online-media-service accounts.

But this can look very ugly and become less usable, and you may not have enough HDMI ports on your TV or audio peripherals (soundbar, AV receiver) to cater for a set-top device bound to each individual household member’s accounts. It is made worse by most TVs having up to 3 HDMI inputs and most popularly-priced audio peripherals only having the one HDMI-ARC connection to the TV.

What can be done?

An online media service that works through a particular online media endpoint device could support multiple logins, perhaps with the number capped at something under ten.

Here, you could have an option to add or delete extra accounts on the online media-service interface as if you were managing your own account on that interface. The authentication process for adding accounts would be the same as for your own account, whether through supplying a username and password or transcribing an on-screen number into the Website or mobile app for that service to enrol a limited-interface device.

A question that will come up is whether to have the accounts concurrently operating with the device exposing the customisations associated with each account on the same interface; or require the end-users to switch accounts for exclusive operation when they want to use their account.

Concurrent operation may be of relevance to, for example, a couple who watch their shows with each other, whereas exclusive operation may come into its own with an adult who watches their shows by themselves. This can also help with building out content recommendations, or with the online-media service keeping track of the popularity of a particular piece of content including how it is enjoyed.

What features can this add to online media consumption?

One feature would be the ability to easily enjoy the same content across different devices associated with your account, no matter whether they are exclusive to your account or not. This would benefit situations where you are working through the same content in different locations, like hearing a playlist from that online music service in the car and then at home on the hi-fi; or watching that series on an iPad on the train home from work, then continuing it on the TV in the main lounge area at home.
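This “continue watching anywhere” feature boils down to keeping the resume position against the account rather than against the device. A minimal sketch of that idea, with all names being hypothetical:

```python
class ResumeStore:
    """Account-side store of playback positions. Because positions are
    keyed on (account, content), any device logged in with the same
    account sees the same resume point."""

    def __init__(self):
        self._positions = {}   # (account, content_id) -> seconds watched

    def save(self, account, content_id, seconds):
        self._positions[(account, content_id)] = seconds

    def resume_point(self, account, content_id):
        # Content never started resumes from the beginning.
        return self._positions.get((account, content_id), 0)

store = ResumeStore()
# Watched 25 minutes on the iPad on the train home...
store.save("alice", "series-ep1", 25 * 60)
# ...the lounge-room TV, logged in as the same account, picks up there.
print(store.resume_point("alice", "series-ep1"))   # 1500
```

A real service would keep this store server-side so every endpoint device sees it, but the account-keyed lookup is the essential part.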

Concurrent operation could also allow for an amalgamated content-choice experience, perhaps with separate menus or playlists for each person. It can extend to providing a list of common favourites or content recommendations that appeal “across the board”.

You can also make sure that the content recommendations offered by the online media service reflect your content-consumption habits rather than being diluted by someone else’s choices. This is more so for music or video content that you enjoy and for which you want to discover similar content.

In some cases, you could have the ability to have the content-recommendation engine come up with content that appeals to the tastes represented by a group of accounts like a household rather than just one account. Such recommendations could be listed alongside account-specific recommendations lists.

Conclusion

What needs to be considered as online multimedia consumption rises is the ability for multiple online media-service accounts to be used for the same service on the same device. This means that these services can work well with the reality of multiple-adult households such as couples or multi-generation households.

It then means that the service is personalised to each end-user’s tastes and the content recommendation system in these services reflects what they watch.


The generation gap has its own digital divide

Article

Old lady making a video call at the dinner table press picture courtesy of NBNCo

Making the idea of new technologies familiar to the older generation needs to be done at a pace they are comfortable with.

Smartphones creating generational and income divide | ABC News

Broadcast

“Living On The Wrong Side Of The ‘Digital Divide’” – ABC Radio AM (Monday 18 November 2019 8:00am)
MP3 audio file

My Comments

The recent ABC Australia Talks National Survey looked at a wide range of issues including climate change and social respect. But one of the issues that is relevant to this Website is the issue of the digital divide.

This highlights how pervasive personal computing, including the use of online services and mobile computing devices like smartphones, is amongst certain parts of the community. It also includes the issue of missing out on essential services due to not being able, or not feeling comfortable enough, to use them online.

As is often discovered, the digital divide potently affects the older generations but also affects people based on income, due to access to the necessary technology. What has rarely been called out is whether the kind of work people do during their working life affects their risk of falling victim to the digital divide.

The ubiquity of new technologies through one’s productive life

Dell XPS 13 2-in-1 Ultrabook at Rydges Melbourne

It took a significant amount of time, from the 1980s through to the 2000s, for the computer to become a mainstream tool in one’s productive life

A key factor that I have noticed is how much of one’s school or working life was spent using the technology and how quickly one adapted to newer technologies as they came into being through their productive life. This is underscored through the evolution of personal computing technology, including networked and online computing, that took place through the 1980s to the 2000s. As well, the 2000s and 2010s brought about the idea of mobile personal computing thanks to smartphones and tablet computers used both personally and in the workplace.

The workplace

Computing became more potent in the workplace through the 1990s as computing power moved to the desktop in an affordable form. This brought about the change from the manual office based around typewriters, ledger books and card files to the highly-computerised office based around on-site desktop and server computers, which handled word-processing, record-keeping and accounting tasks using computer software.

After the Australia Talks show, a follow-up conversation I had with an older man who had worked in an office facing this transition revealed that his workplace’s accountant took a long time to adapt his accountancy skills to the highly-computerised office. Here, it was about being able to use the computer software to maintain an electronic equivalent of the ledger books.

Let’s not forget that a significant amount of blue-collar or public-facing work didn’t involve the use of computer technology. In some workplaces, specially-employed staff would collect or provide data to the blue-collar or public-facing workers as they needed it. If the worker ran their own business, they would be performing most of the record-keeping for their business the manual way.

The school

The same situation can be underscored through one’s school life before the 1990s where personal computing tended to play second fiddle. This would be about practices like a separate computer-studies lesson that would be taught in a computer lab equipped with personal computers. Other subjects typically required students to present their work in a handwritten form while teachers engaged in “talk and chalk” presentation techniques for most of their classes.

As well, investment in computer education tended to depend upon the whims of the powers that be that oversee the school, whether they be the school board or the government-based or church-based education authority that the school was accountable to. In some cases, it required private enterprise to “chip in” to provide the necessary capital for this kind of study.

After the 1990s, most schools encouraged the use of office software like word-processors, spreadsheets and presentation software as a teaching tool. This is along with the idea of seeing the computer as a tool to facilitate integrated learning and online research.

Being able to adapt to newer technologies

Let’s not forget that different people adapt to the newer technologies at a different pace. It includes whether they will deploy and encourage the use of that new technology to an area that they have oversight of and with what level of enthusiasm. This may be driven by factors including confidence, financial abilities or whether there is much support for that technology.

Some of us may be keener on implementing newer technologies in our personal and productive lives, such as being “early adopters” of newer technology. These people will typically be interested in what new technologies come their way, such as through reading journals, magazines and other material on the subject. This is typically said of younger people who tend to adapt very easily to newer social and other trends.

Similarly, some users have demonstrably wanted to adapt to the newer technologies using various methods such as class-based or self-paced training; and / or playing video games based on the new platform in order to make themselves comfortable with it.

Then there are others who will take a slower approach towards handling anything that is new. This may be due to avoiding being stuck with a trend that may be a “rooster one day, feather duster the next”. Similarly, it may be about waiting for the technology to become mature and affordable for most people before they take it up. Some users may even be very sceptical of any new trend that comes along, especially if it is pushed on them. This is often said of mature or older people who are more settled in their ways regarding life.

Conclusion

There are many different reasons why there will be a generation-specific digital divide, with the core one being how much exposure each generation has had to newer information technology through school and working life.

To assist with this digital divide, there will need to be people within their family and community who can help them understand and utilise the various newer technologies that come their way. It can also be through the use of courses and other computer-literacy education tools that are pitched to this generation and its IT needs.


What is infrastructure-level competition and why have it?

Fibre optic cable trench in village lane - press picture courtesy of Gigaclear

Gigaclear underscores the value of infrastructure-level competition

An issue that will be worth raising regarding the quality of service for newer high-speed fixed-line broadband services is the existence of infrastructure-level competition.

When we talk of infrastructure for a fixed-line Internet service, we are talking of copper and/or fibre-optic cabling used to take this service around a neighbourhood to each of the customers’ premises.

Then each premises has a modem of some sort, that in a lot of cases is integrated in the router, which converts the data to a form that makes it available across its network. A significant number of these infrastructure providers will supply the modem especially if they cannot provide a “wires-only” or “bring your own modem” service due to the technology they are implementing and, in a lot of these cases, will legally own the modem.

In Europe, Australia and some other countries, this broadband infrastructure is provided by an incumbent telco or an infrastructure provider and multiple retail-level telecommunications and Internet providers lease capacity on this infrastructure to provide their services to the end-user. This is compared to North America where an infrastructure provider exclusively provides their own retail-level telecommunications and Internet services to end users via their infrastructure.

In a lot of cases where multiple retail telecommunications and Internet providers use the same infrastructure, the incumbent telco may be required to divest themselves of their fixed-line infrastructure to a separate privately-owned or government-owned corporation in order to satisfy a competitive-service requirement. This means that they cannot provide a retail Internet or telecommunications service over that infrastructure at a cost advantage over competitors offering the same service over the same infrastructure. Examples of this include Openreach in the UK, NBN in Australia and Chorus in New Zealand.

A problem with having a dominant infrastructure provider is the strong risk of that provider offering poor value for money to retail telecommunications providers and their end-users when it comes to telecommunications and Internet services. It can also include this provider engaging in “redlining”, which is the practice of providing substandard infrastructure, or refusing to provide any infrastructure, to neighbourhoods they don’t perceive as being profitable, like those that are rural or disadvantaged.

Some markets like the UK and France implement or encourage infrastructure-level competition where one or more other entities can lay their own infrastructure within urban or rural neighbourhoods. Then they can either run their own telecommunications and Internet services or lease the bandwidth to other companies who want to provide their own services.

Infrastructure-level competition

Where infrastructure-level competition exists, there are at least two different providers who provide street-based infrastructure for telecommunications and Internet service. The providers may run their own end-user telecommunications and Internet services using this infrastructure and/or they simply lease the bandwidth provided via this infrastructure to other retail Internet providers to provide these services to their customers.

Some competitors buy and use whatever “dark fibre” exists from previous fibre-optic installations to provide this service. Or they provide an enterprise communications infrastructure for government or big business in a neighbourhood, then use dark fibre or underutilised fibre capacity from that job to offer infrastructure-level competition in the area.

As well, larger infrastructure operators who pass many premises in a market may be required to open up their infrastructure to telcos and Internet service providers that compete with their retail offering. This is something that ends up as a requirement for a highly-competitive telecommunications environment.

This kind of competition allows a retail-level telco or ISP to choose infrastructure for their service that offers them best value for money. This is more important for those retail-level ISPs and telcos who offer telecommunications and Internet to households and small businesses. As well, whenever a geographic area like a rural neighbourhood or new development is being prepared for high-speed broadband Internet, it means that the competing infrastructure providers are able to offer improved-value contracts for the provision of this service in that area.

Infrastructure-level competition also allows for the retail-level providers to innovate in providing their services without needing to risk much money in their provision. It can allow for niche providers such as high-performance gaming-focused ISPs or telcos that offer triple-play services to particular communities.

There is also an incentive amongst infrastructure providers to improve their customer service and serve neighbourhoods that wouldn’t otherwise be served. It is thanks to the risk of retail ISPs or their customers jumping to competitors if the infrastructure provider doesn’t “cut the mustard” in this field. As well, public spending on broadband access provision benefits due to the competition for infrastructure tenders for these projects.

What needs to happen

Build-over conditions

An issue commonly raised by independent infrastructure providers who are the first to wire-up a neighbourhood is the time they have exclusive access to that market. It is raised primarily in the UK by those independent infrastructure providers like Gigaclear or community infrastructure co-operatives like B4RN who have engaged in wiring up a rural community with next-generation fibre-optic broadband whether out of their pocket or with financial assistance from local government or local chambers of commerce.

This is more so where an established high-profile infrastructure provider that has big-name retail Internet providers on its books hasn’t wired-up that neighbourhood yet or is providing a service of lower capability compared to the independent provider who appeared first. For these independent operators, it is about making sure that they have a strong profile in that neighbourhood during their period of exclusivity.

Then, when the established infrastructure provider offers an Internet service of similar or better standard to the independent provider, the situation is described as a “build-over” condition. It then leads to the independent provider becoming an infrastructure-level competitor against the established provider, which may impinge on cost recovery as far as the independent’s infrastructure is concerned. Questions that will come up include whether the independent operator should be compensated for loss of exclusivity in the neighbourhood, or whether a retail ISP or telco who used the independent’s infrastructure should be allowed to offer their service on the newcomer’s infrastructure.

Pits, Poles and Pipes

Another issue that will be raised is the matter of the physical infrastructure that houses the cable or fibre-optic wiring i.e. the pits, poles and pipes. These may be installed and owned by the telecommunications infrastructure provider for their own infrastructure or they may be installed and owned by a third-party operator like a utility or local council.

The first issue that can be raised is whether an infrastructure provider has exclusive access to particular physical infrastructure and whether they have to release the access to this infrastructure to competing providers. It doesn’t matter whether the infrastructure provider has their own physical infrastructure or gains access rights to physical infrastructure provided by someone else like a local government or utility company.

The second issue that also can crop up is access to public thoroughfares and private property to install and maintain infrastructure. This relates to legal access powers that government departments in charge of the jurisdiction’s regulated thoroughfares like roads and rails may provide to the infrastructure provider; or the wayleaves and easements negotiated between property owners and the infrastructure provider. In the context of competitive service, this may be about whether or not an easement, for example, is exclusive to a particular infrastructure provider.

Sustainable competition

Then there is the issue of sustainable competition within the area. This is where the competitors and the incumbent operator can make money by providing infrastructure-level Internet service yet the end-users have the benefits of a highly-competitive market. A market with too much competition can easily end up with premature consolidation where various retail or infrastructure providers cease to exist or end up merging.

Typically, the number of operators that can sustainably compete may be assessed on the neighbourhood’s adult population count or the number of households and businesses within the neighbourhood. It can also be assessed on the number of households and businesses that are actually taking up broadband services, or are likely to do so, in that neighbourhood.

Retail providers having access to multiple infrastructure providers

An issue that will affect retail-level telcos and ISPs is whether they have access to only one infrastructure operator or can benefit from access to multiple operators. This may be an issue where the infrastructure operators differ in attributes like maximum bandwidth or footprint and a major retail-level operator wants to benefit from these different attributes.

In one of these situations, a retail-level broadband provider who wants to touch as many markets as possible may use one infrastructure provider for the areas that provider serves, then use other providers for areas that their preferred infrastructure provider doesn’t touch yet. This may also apply if they want to offer service plans of a particular specification via an infrastructure provider answering that specification, even if it competes with the infrastructure provider they normally use.

Multiple-premises developments

Then there is the issue of multiple-premises buildings and developments where there is a desire to provide this level of service competition for the occupants but offer it in a cost-effective manner.

This may be answered by each infrastructure provider running their own wiring through the building, but this approach leads to multiple wires and points being installed at each premises. On the other hand, an infrastructure cable of a particular kind could be wired through the building and linked using switching / virtual-network technology to different street-side infrastructures. This could be based on cable technology like VDSL, Ethernet or fibre-optic, so that infrastructure providers who use a particular technology for in-building provision use the infrastructure relating to that technology.
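As a sketch of that second approach, the building’s switching equipment could map each premises’ wall port to a VLAN that carries the occupant’s chosen street-side provider over the one shared backbone. The provider names and VLAN IDs below are purely illustrative assumptions:

```python
# Hypothetical in-building VLAN plan: one Ethernet backbone through the
# building, with each premises' port tagged onto the VLAN that carries
# its occupant's chosen street-side infrastructure provider.
PROVIDER_VLANS = {"Provider A": 10, "Provider B": 20}

premises_choice = {
    "apt-101": "Provider A",
    "apt-102": "Provider B",
    "apt-103": "Provider A",
}

def vlan_for(premises):
    """VLAN ID the building switch should tag this premises' port with."""
    return PROVIDER_VLANS[premises_choice[premises]]

for apt in sorted(premises_choice):
    print(apt, "-> VLAN", vlan_for(apt))
```

Changing an occupant’s provider then becomes a one-line change to the mapping rather than a re-wiring job, which is the cost-effectiveness argument in a nutshell.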

Estate-type developments with multiple buildings may have questions raised about them. Here, it may be about whether the infrastructure is to be provided and managed on a building-level basis or a development-wide basis. This can be more so where the multiple-building development is to be managed during its lifetime as though it is one entity comprising many buildings.

Then there is the issue of whether the governing body of a multiple-premises development should be required to prevent infrastructure-provider exclusivity. This can crop up where an infrastructure provider or ISP pays the building manager or governing body of one of these developments to maintain infrastructure exclusivity, perhaps by satisfying the governing body’s own Internet needs for free.

In all of these cases, it would be about making sure that each premises in a multiple-premises development is able to gain access to the benefits of infrastructure-level competition.

Conclusion

The idea of infrastructure-level competition for broadband Internet is to be considered of importance as a way to hold dominant infrastructure providers to account. Similarly, it can be seen as a way to push proper broadband Internet service into underserved areas, whether with or without public money.


Indie games like Untitled Goose Game appeal to people outside the usual game demographics

Articles

Honk if you’ve got a hit: Melbourne-made “horrible goose” game goes global | The Age

Everyone from Chrissy Teigen to Blink-182 is freaking out about a ‘goose game’ — one look at the bizarre new game explains why | Business Insider

Untitled Goose Game Melbourne-based creators stunned after topping Nintendo charts | ABC News Australia

From the horse’s mouth

Untitled Goose Game (product page)

Video – Click or tap to play

Previous coverage on indie games

How about encouraging computer and video games development in Europe, Oceania and other areas

Alaskan fables now celebrated as video games

Two ways to put indie games on the map

My Comments

What is being realised now is that independently-developed electronic games are appealing to a larger audience than most of those developed by the mainstream games studios.

A case in point that has appeared very recently is Untitled Goose Game. This game, available for Windows and macOS regular computers via the Epic Games Store and for the Nintendo Switch handheld games console via its app store, is about you controlling a naughty goose as you have it wreak havoc around an English rural village.

Here, it uses cartoon imagery and a slapstick-style comic approach of the kind associated with Charlie Chaplin or Laurel and Hardy in the early days of cinema to provide amusement that appeals across the board. It also underscores concepts that aren’t readily explored in the video-games mainstream.

This game was developed by a small North Fitzroy game studio called House House and had been underpinned by funds from the state government’s culture ministry (Film Victoria) before it was published by an independent games publisher called Panic.

A close friend of mine, a 70-something-year-old woman, was having a conversation with me yesterday about this game and we remarked on it being outside the norm for video games as far as themes go. I also noticed that her interest in this game underscored its reach beyond the usual video-game audience, where it would appeal to women and mature-to-older-age adults, with her considering it a possible guilty pleasure once I mentioned where it’s available.

With Untitled Goose Game being successful on the Nintendo Switch handheld games console, it could be a chance for Panic or House House to see the game ported to mobile platforms. This would be of most benefit to those of us who are more likely to use an iPad or Android tablet to play “guilty-pleasure” games. This is in addition to optimising the Windows variant’s user interface to also work with touchscreens so it can be played on 2-in-1 laptops.

What is happening is that there is an effort amongst indie games developers and publishers to make their games appeal to a wide audience including those of us who don’t regularly play video games.


Make VPN, VLAN and VoIP applications easy to set up in your network

Draytek Vigor 2860N VDSL2 business VPN-endpoint router press image courtesy of Draytek UK

Routers like the Draytek Vigor 2860N which support VPN endpoint and IP-PBX functionality could benefit from simplified configuration processes for these functions

Increasingly, the virtual private network, virtual local-area network and IP-based voice and video telephony setups are becoming more common as part of ordinary computing.

The VPN is being seen as a tool to protect our personal privacy or to avoid content-blocking regimes imposed by nations or other entities. Some people even use this as a way to gain access to video content available in other territories that wouldn’t be normally available in their home territory. But VPNs are also seen by business users and advanced computer users as a way to achieve a tie-line between two or more networks.

The VLAN is becoming of interest to householders as they sign up to multiple-play Internet services with at least TV, telephony and Internet service. Some of the telcos and ISPs are using the VLAN as a way to assure end-users of high quality-of-service for voice or video-based calls and TV content made available through these services.

AVM FRITZ!Box 3490 - Press photo courtesy AVM

… as could the AVM Fritz!Box routers with DECT base station functionality

It may also have some appeal with some multiple-premises developments as a tool to provide the premises occupiers access to development-wide network resources through the occupiers’ own networks. It will also appeal to public-access-network applications which share the same physical infrastructure as private networks such as FON-type community networks including what Telstra and BT are running.

VoIP and similar IP-based telecommunications technologies will become very common for home and small-business applications. This is driven by incumbent and competing telecommunications providers moving towards IP-based setups thanks to factors like IP-driven infrastructure or a very low cost of entry. It also includes the desire to integrate entryphone systems that are part of multi-premises buildings into IP-based telecommunications setups, including voice-driven home assistants or IP-PBX business-telephony setups.

Amazon Echo on kitchen bench press photo courtesy of Amazon USA

A device like the Amazon Echo could be made in to a VoIP telephone through an easy-to-configure Alexa Skill

In the same context, an operating-system or other software developer may want to design a “softphone” for IP-based telephony in order to have it run on a common computing platform.

What is frustrating these technologies?

One key point that makes these technologies awkward to implement is the configuration interface associated with the various devices that benefit from these technologies like VPN endpoint routers or IP-based telephony equipment. The same situation also applies if you intend to implement the setup with multiple devices especially where different platforms or user interfaces are involved.

This kind of configuration also increases the chance of user error taking place during the process which then leads to the setup failing with the user wasting time on troubleshooting procedures to get it to work. It also makes the setup process very daunting for people who don’t have much in the way of IT skills.

For example, you have to complete many steps to enrol the typical VPN endpoint router with a consumer-facing privacy-focused VPN in order to assure network-wide access to these VPNs. This involves transcribing configuration details for one of these VPNs to the router’s Web-based management interface. The same thing also applies if you want to create a VPN-based data tie-line between networks installed at two different premises.

Similarly, IP-based telephony is very difficult to configure, with customers opting for pre-configured IP telephone equipment. This then frustrates the idea of allowing a customer to purchase equipment or software from different resellers, thanks to the difficult configuration process. Even small businesses face this same difficulty, whether it is to add, move or remove extensions, create inter-premises tie-lines, or add extra trunk lines to increase call capacity or provide “local-number” access.

This limits various forms of innovation in this space such as integrating a building’s entryphone system into one’s own telephone setup or allowing Skype, Facebook Messenger, WhatsApp or Viber to permit a business to have a virtual telephone link to their IP-telephony platforms.

It also limits the wide availability to consumers and small businesses of “open” network hardware that can answer these functions. This is more so with VPN-endpoint routers or routers that have IP-based telecommunications functionality which would benefit from this kind of simplified configuration process.

What can be done?

A core requirement to enable simplified provisioning of these technologies is to make use of an XML-based standard configuration file that contains all of the necessary configuration information.

It can be transferred through a download from a known URL link or a file that is uploaded from your computing device’s local file system. The latter approach can also apply to using removable storage to transfer the file between devices if they have an SD-card slot or USB port.
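As an illustration of what such a file could look like, here is a hypothetical provisioning document parsed with Python’s standard xml.etree library. The element names and parameters are my own assumptions for the sake of the example; no actual provisioning standard is implied.

```python
import xml.etree.ElementTree as ET

# A hypothetical provisioning file for a VPN-endpoint router that also
# handles IP telephony. Every element name here is an illustrative
# assumption, not any published standard.
CONFIG = """<?xml version="1.0"?>
<provision>
  <vpn name="Office tie-line">
    <protocol>IKEv2</protocol>
    <server>vpn.example.com</server>
    <username>branch-office</username>
  </vpn>
  <voip line="Front door">
    <sip-server>sip.example.com</sip-server>
    <shortcode function="door-release">*9</shortcode>
  </voip>
</provision>"""

root = ET.fromstring(CONFIG)
vpn = root.find("vpn")
print(vpn.get("name"), vpn.findtext("server"))
# The device's firmware would walk the tree like this and hand each
# section to the matching subsystem (VPN client, SIP stack and so on),
# including the special-function shortcodes described later on.
for shortcode in root.iter("shortcode"):
    print(shortcode.get("function"), "->", shortcode.text)
```

The point is that one self-describing file, downloaded from a URL or loaded from removable storage, carries everything the device needs, rather than the user transcribing each value into a Web-based management page.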

Where security is important or the application depends on encryption for its operation, the necessary binary public-key files and certificates could be in a standard form with the ability to have them available through a URL link or local file transfer. It also extends to using technologies based around these public keys to protect and authenticate the configuration data in transit or apply a digital signature or watermark on the configuration files to assert their provenance.
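To show the verify-before-apply idea, here is a simplified sketch using an HMAC from Python’s standard library. Note that this substitutes a shared-secret authentication tag for the public-key signatures and certificates described above, purely to keep the example self-contained; the names are hypothetical.

```python
import hashlib
import hmac

def sign_config(config_bytes, key):
    """Attach an authentication tag so a device can check that the
    provisioning file wasn't altered in transit. Simplified: a real
    scheme would verify a public-key signature against a trusted
    certificate; HMAC with a shared key stands in for the idea here."""
    return hmac.new(key, config_bytes, hashlib.sha256).hexdigest()

def verify_config(config_bytes, key, tag):
    """Device-side check, done before any settings are applied."""
    return hmac.compare_digest(sign_config(config_bytes, key), tag)

key = b"device-enrolment-secret"        # hypothetical shared secret
config = b"<provision>...</provision>"  # the provisioning file's bytes
tag = sign_config(config, key)
print(verify_config(config, key, tag))          # True
print(verify_config(config + b"x", key, tag))   # tampering detected: False
```

A device that refuses any file failing this check gets the “assert provenance” property the configuration scheme depends on.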

I would also see it as being important that this XML-based configuration-file approach work with polished provisioning interfaces. These graphically-rich user interfaces, typically associated with consumer-facing service providers, implement subscription and provisioning through the one workflow and are designed to be user-friendly. It also applies to achieving a “plug-and-play” onboarding routine for new devices, where very little user interaction is required during the configuration and provisioning phase.

This can be facilitated through device-discovery and management protocols like UPnP or WSD, which can direct the upload of configuration files to the correct devices. Alternatively, the necessary XML files could be created and stored on the user's computer's local storage for the user to upload to the devices they want to configure.
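The discovery step that UPnP provides is its SSDP search, and a provisioning tool could use it to find which devices on the home network accept a configuration upload. The sketch below composes the standard SSDP M-SEARCH request using only the standard library; the `discover` helper is illustrative and simply collects whatever responds on the LAN.

```python
import socket

# SSDP multicast address and port defined by the UPnP specification
SSDP_ADDR = ("239.255.255.250", 1900)

def build_msearch(search_target="upnp:rootdevice", wait=2):
    """Compose an SSDP M-SEARCH request to discover devices on the LAN."""
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        f"HOST: {SSDP_ADDR[0]}:{SSDP_ADDR[1]}\r\n"
        'MAN: "ssdp:discover"\r\n'
        f"MX: {wait}\r\n"
        f"ST: {search_target}\r\n\r\n"
    ).encode()

def discover(timeout=2.0):
    """Broadcast the search and collect responses from nearby devices."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), SSDP_ADDR)
    replies = []
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            replies.append((addr[0], data.decode(errors="replace")))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return replies
```

Each reply carries a LOCATION header pointing at the device's description document, which is where a provisioning workflow would learn what the device is and where to send its configuration file.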

Another factor is identifying how a device should react in certain situations, such as a VPN endpoint router being configured for two or more VPNs that are expected to run concurrently. It also includes allowing a device to support special functions, something common in the IP-based telecommunications space where it is desirable to map particular buttons, keypad shortcodes or voice commands to dial particular numbers or activate particular functions like door-release or emergency hotline access.

Similarly, the use of “friendly” naming as part of the setup process for VLANs, VPNs and devices or lines in an IP-telephony system could make the setup and configuration easier. This is important when it comes to revising a configuration to suit newer needs or simply understanding the setup you are implementing.

Conclusion

Using XML-based standard provisioning files and common data-transfer procedures for VLAN, VPN and IP-based-telecommunications setups can allow for a simplified setup and onboarding experience. It can also allow users to easily maintain their setups, such as bringing new equipment on board or factoring in changes to their service.


Ambient Computing – a new trend

Article

Smart speakers like the Google Home are the baseline for the new concept of ambient computing

Lenovo see smart displays as a foundation for ambient computing | PC World

My Comments

A trend that is appearing in our online life is "ambient computing" or "ubiquitous computing". This is where computing technology is effectively part of our daily lives without us having to do anything specific about it.

One driver that is facilitating it is the use of voice-driven assistant technology like Apple's Siri, Amazon's Alexa, Google's Assistant or Microsoft's Cortana. It manifested initially in mobile operating systems like Android and iOS but has come about more so with smart speakers of the Amazon Echo, Google Home or Apple HomePod kind, along with Microsoft and Apple putting this functionality into desktop operating systems like Windows and macOS.

Lenovo Smart Display press picture courtesy of Lenovo USA

as are smart displays of the Lenovo Smart Display kind

As well, Amazon and Google have licensed out front-end software for their voice-driven home assistants so that third-party equipment manufacturers can integrate this functionality in their consumer-electronics products. It also includes the availability of devices that connect to larger-screen TVs or higher-quality sound systems to use them as display or audio surfaces for these voice-driven assistants, even simply to play audio or video content pulled up at the command of the user.

Lenovo underscored this with their current and upcoming Smart Display products, including the Lenovo Yoga Smart Tab premiered at IFA 2019 in Berlin. These are based on the Google Home platform, and Lenovo was underscoring the role of these displays in ambient computing.

Another key driving factor is the Internet of Things which may be seen in the home context as lights, appliances and other devices connected to the home network and Internet. It doesn’t matter whether they connect to the IP-based home network directly or via a “home hub” device. These work with the various voice-driven home-assistant platforms as sensors or controlled devices or, in some cases, alternate control surfaces.

It extends beyond the home through interaction with various building-wide or city-wide services that relate to energy use, transport and personal security, amongst other things.

The other key driver that is highlighted is the use of distributed computing or “the cloud” where the data is processed or presented in a manner that is made available via the Internet on any device. It can also include online services that present information or content at your fingertips from anywhere in the world. In some cases, there is the use of data aggregation to create a wider picture of what is going on.

What this all adds up to is the concept of an "information butler" that responds with information or content as you need it. This underscores that ambient or ubiquitous computing is not just a Silicon Valley buzzword but a real concept.

What does the concept of ambient or ubiquitous computing underscore?

Here it is the use of information technology in a manner that blends in with your lifestyle rather than being a separate activity. You interact with one or more of the endpoints while you undertake a regular daily task, whether that is surfacing information you need or setting up the environment for that activity. It relies less on active participation by the end-user.

Ambient computing is adaptive in that it fits in and adapts to your changing needs. It is also anticipatory because it can anticipate future needs like, for example, changing the heating setting to cope with a change in the weather. It also demonstrates context awareness by recognising users and the context of their activity.

But ambient computing still has its issues. One key issue that is called out frequently is end-user privacy including protection of minor children when users interact with these systems. An article published by Intel underscores this in the context of simplifying the management of our privacy wishes with the various devices and online services through the use of “agent” software.

This also relates to data security for the infrastructure along with data sovereignty (which country the data resides in) due to issues like information theft and use of information by foreign governments.

Similarly, allowing ambient-computing environments to determine activities like what content you enjoy can be of concern. This is more important because you may choose particular content based on your values and what others who have similar tastes and values recommend. It can also lead to avoiding addiction to content that can be socially harmful or enforcing the consumption of a particular kind of content upon people at the expense of other content.

Another factor that can creep in if common data-interchange standards aren't implemented is the existence of data "silos", where an ambient-computing environment is limited to hardware and software provided by particular vendors. This can limit competition in the provision of these services, which can restrict the ability to innovate when it comes to developing these systems further.

But what is now being seen as important for our online life is the trend towards ubiquitous ambient computing that is simply part of our lives.
