Current and Future Trends Archive

Amazon starts Voice Interoperability Initiative for voice-driven assistant technology

Articles

Amazon Echo on kitchen bench press photo courtesy of Amazon USA

Devices like Amazon Echo could support multiple voice assistants

Amazon Creates A Huge Alliance To Demand Voice Assistant Compatibility | The Verge

Amazon launches Voice Interoperability Initiative — without Google, Apple or Samsung | ZDNet

Amazon enlists 30 companies to improve how voice assistants work together | Engadget

From the horse’s mouth

Amazon

Voice Interoperability Initiative (Product Page)

Amazon and Leading Technology Companies Announce the Voice Interoperability Initiative (Press Release)

My Comments

Amazon have instigated the Voice Interoperability Initiative which, at the moment, allows a hardware or software device to work with multiple compatible voice-driven AI assistants. It also includes the ability for someone to develop a voice-driven assistant platform that can serve a niche yet have it run on commonly-available smart-speaker hardware alongside a broad-based voice-driven assistant platform.

Freebox Delta press photo courtesy of Iliad (Free.fr)

Freebox Delta as an example of a European voice-driven home assistant that could support multiple voice assistant platforms

An example they called out was to run the Salesforce Einstein voice-driven assistant, which works with Salesforce’s customer-relationship-management software, on the Amazon Echo smart speaker alongside the Alexa voice assistant. Similarly, a person who lives in France and takes advantage of the highly-competitive telecommunications and Internet landscape there by buying the Freebox Delta smart speaker / router could have that same device run Free.fr’s own voice assistant platform alongside Amazon Alexa.

Microsoft, BMW, Free.fr, Baidu, Bose, Harman and Sony are behind this initiative while Google, Apple and Samsung are definitely absent. This is most likely because Google, Apple and Samsung have their own broad-based voice-driven assistant platforms that are part of their hardware or operating-system platforms, with Apple placing more emphasis on vertically-integrating some of their products. That said, Samsung’s Android phones are set up to be able to work with either their Bixby voice assistant or Google’s Assistant service.

Intel and Qualcomm are also behind this effort by offering silicon that provides the power to effectively understand the different wake words and direct a session’s focus towards a particular voice assistant.

The same hardware device or software gateway can recognise assistant-specific wake words and react to them on a session-specific basis. There will be the ability to assure customer privacy through measures like encrypted tunnelling for each assistant session along with an effort to be power-efficient which is important for battery-operated devices.
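To make that session-focus idea concrete, here is a minimal sketch of how a multi-assistant device might route microphone audio once a wake word is spotted. This is purely illustrative TypeScript under my own assumptions – the initiative hasn’t published an SDK, and every name here is invented.

```typescript
// Hypothetical sketch of per-session wake-word routing on a multi-assistant
// device. WakeWordEngine output, AssistantSession and the assistant IDs are
// all invented for illustration.

type AssistantId = "alexa" | "einstein" | "freebox";

interface AssistantSession {
  // Each session would get its own encrypted tunnel to its assistant's cloud.
  sendAudio(frame: Float32Array): void;
  close(): void;
}

class MultiAssistantRouter {
  private active: AssistantSession | null = null;

  constructor(private connect: (id: AssistantId) => AssistantSession) {}

  // Called by the on-device wake-word engine when it spots a keyword.
  onWakeWord(id: AssistantId): void {
    this.active?.close();           // end any session in progress
    this.active = this.connect(id); // open a fresh, assistant-specific tunnel
  }

  // Microphone frames only flow to the assistant that owns the session,
  // which is the privacy property the initiative describes.
  onAudioFrame(frame: Float32Array): void {
    this.active?.sendAudio(frame);
  }
}
```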

Personally, I see this as an opportunity for companies to place emphasis on niche voice-assistant platforms like what Salesforce is doing with their Einstein product or Microsoft with its refocused Cortana product. It can even make the concept of these voice assistants more relevant to the enterprise market and business customers.

Similarly, telcos and ISPs could create their own voice-driven assistants for use by their customers, typically with functionality that answers what they want out of the telco’s offerings. It can also extend to the hotel and allied sectors that want to use voice-driven assistants for providing access to functions of benefit to hotel guests like room service, facility booking and knowledge about the local area. Let’s not forget vehicle builders who implement voice-driven assistants as part of their infotainment technology so that the driver has both hands on the wheel and eyes on the road.

This kind of offering can open up a market for the creation of “white-label” voice-assistant platforms that can be “branded” by their customers. As well, some of these assistants can be developed with a focus towards a local market’s needs like high proficiency in a local language and support for local values.

For hardware, the Amazon Voice Interoperability Initiative can open up paths for innovative devices. This can lead towards ideas like automotive applications, smart TVs, built-in use cases like intercom / entryphone or thermostat setups, and software-only assistant gateways that work with computers or telephone systems, amongst other things.

With the Amazon Voice Interoperability Initiative, there will be increased room for innovation in the voice-driven assistant sector.


Wi-Fi 6 is here for certain

Articles

TP-Link Archer AX6000 Wi-Fi 6 broadband router product picture courtesy of TP-Link USA

TP-Link Archer AX6000 Wi-Fi 6 broadband router – an example of a Wi-Fi 6 router

Wi-Fi 6: Better, faster internet is coming — here’s what you need to know | CNet

Should You Upgrade to Wi-Fi 6? | PC Mag

Previous Coverage

New nomenclature for Wi-Fi wireless networks

What will 802.11ax Wi-Fi wireless networking be about?

From the horse’s mouth

Wi-Fi Alliance

Wi-Fi CERTIFIED 6™ delivers new Wi-Fi® era (Press Release)

Wi-Fi CERTIFIED 6™ delivers new Wi-Fi® era (Product Page)

My Comments

The Wi-Fi Alliance have started this week to certify devices as compliant with the new Wi-Fi 6 (802.11ax) wireless-network standard. This effectively means that this technology is ready for prime time.

But what will it offer?

NETGEAR Orbi with Wi-Fi 6 press picture courtesy of NETGEAR

NETGEAR Orbi Wi-Fi 6 – the first distributed Wi-Fi setup with Wi-Fi 6 technology

Wi-Fi 6 will offer a theoretical data throughput of 10Gbps which is 30% faster than Wi-Fi 5 setups. There will also be the ability for one access point or router to support many Wi-Fi client devices at once, thus preventing that device from being “oversubscribed” and underperforming when many devices come on board. It answers a common situation where a small network that is typically served by one Wi-Fi router ends up having to support multiple Wi-Fi client devices like laptops, smartphones, smart speakers of the Amazon Echo kind, and set-top devices for streaming video. This is facilitated through the use of a higher-capacity MU-MIMO technology.

In addition, the Wi-Fi 6 routers and access points implement OFDMA technology to share channels and use them efficiently. It will mean that multiple Wi-Fi 6 networks can coexist without underperforming which will be of benefit for apartment dwellers or trade shows and conferences where multiple Wi-Fi networks are expected to coexist.

There is also the Target Wake Time feature to “schedule” use of a Wi-Fi 6 network by battery-operated devices. This will allow them to know when to send data updates to the network, especially if they don’t change status often, which will benefit “Internet-of-Things” devices where there is the desire to run them for a long time on commodity batteries.

Wi-Fi 6 devices will also be required to support WPA3 as their network-security standard, which raises the baseline expectation of a secure Wi-Fi network for these devices.

At the moment, routers and access points based on Wi-Fi 6 will be positioned at the premium end of the market and be typically targeted towards “be first with the latest” early adopters. But over the next year or two, the market will settle out with devices at more affordable price points.

Premium smartphones, tablets and laptops that are being redesigned from the ground up with new silicon will end up with Wi-Fi 6 network interface chipsets. This will apply to the Samsung Galaxy S10 family, computers based on Intel Ice Lake CPUs and the Apple iPhone 11 family. As well, some network-hardware vendors are offering add-on Wi-Fi 6 network adaptors that plug in to your laptop computer’s USB port to enable it for the new technology.

At the moment, if you are running a network with a Wi-Fi 5 access point or router that is serving devices based on Wi-Fi 4 (802.11n) and Wi-Fi 5 (802.11ac) technology, you don’t need to upgrade the access point or router yet.

But if you have to replace that device due to the existing unit dying or you intend to set up a new Wi-Fi network, it may be worth investigating the purchase of network infrastructure equipment based on Wi-Fi 6.

You will also find that each device will be provided with “best case” performance based on its technology. This means that if you install a Wi-Fi 6 access point or router on your network then subsequently sign a subsidised-equipment post-paid service contract for a smartphone with Wi-Fi 6 technology built in, the smartphone will work to Wi-Fi 6 levels while your laptop that supports Wi-Fi 5 technology works to that prior technology without impeding your smartphone’s Wi-Fi 6 functionality.

If you bought one of the earlier Wi-Fi 6 routers or distributed Wi-Fi setups which works to pre-certification standards, check your manufacturer’s site for any new firmware that will have the device working to the current specifications and upload it to your device.

Wi-Fi 6 wireless networks will become a major boon for evolving local-area networks towards higher capacity and faster throughput on wireless segments.


WindowsCentral has identified a handful of portable external graphics modules for your Thunderbolt 3 laptop

Article

Sonnet eGFX Breakaway Puck integrated-chipset external graphics module press picture courtesy of Sonnet Systems

Sonnet eGFX Breakaway Puck integrated-chipset external graphics module – the way to go for ultraportables

Best Portable eGPUs in 2019 | WindowsCentral

From the horse’s mouth

Akitio

Node Pro (Product Page)

Gigabyte

Aorus Gaming Box (Product Page)

PowerColor

PowerColor Mini (Product Page)

Sonnet

Sonnet eGFX Breakaway Puck (Product Page)

My Comments

More of the Thunderbolt-3 external graphics modules are appearing on the scene, but most of them are heavy units with plenty of connectivity on them. This is good if you wish to have the external graphics module as part of your main workspace / gaming space rather than as something you are likely to take with you as you travel with that Dell XPS 13 Ultrabook or MacBook Pro.

Dell XPS 13 9360 8th Generation clamshell Ultrabook

Dell XPS 13 9360 8th Generation clamshell Ultrabook – an example of an ultraportable computer that can benefit from one of the portable external graphics modules

Windows Central have called out a selection of these units that are particularly portable in design to allow for ease of transport. This will appeal to gamers and the like who have access to a large-screen TV in another room that they can plug video peripherals in to, such as university students living in campus accommodation or a sharehouse. It can also appeal to those of us who want to use the laptop’s screen with a dedicated graphics processor, such as to edit and render video footage we have captured or play a game with best video performance.

Most of the portable external graphics modules will be embedded with a particular graphics chipset and a known amount of display memory. In most cases this will be a high-end mobile GPU which may be considered low-spec by desktop (gaming-rig) standards. There will also be reduced connectivity options especially with the smaller units but they will have enough power output to power most Thunderbolt-3-equipped Ultrabooks.

An exception that the article called out was the Akitio Node Pro which is a “card cage” that is similar in size to one of the new low-profile desktop computers. This unit also has a handle and a Thunderbolt-3 downstream connection for other peripherals based on this standard. It would need an active DisplayPort-HDMI adaptor or a display card equipped with at least one HDMI port to connect to the typical large-screen TV set.

Most of the very small units or units positioned at the cheap end of the market would excel at 1080p (Full HD) graphics work. This would be realistic for most flatscreen TVs that are in use as secondary TVs, or for using the laptop’s own screen if you stick to the advice to specify Full HD (1080p) as a way to conserve battery power on your laptop.

The exception in this roundup of portable external graphics modules was the AORUS Gaming Box which is kitted out with the NVIDIA GeForce GTX 1070 graphics chipset. This would be considered a high-performance unit.

Here, these portable external graphics modules are identified as useful where you are likely to take them between locations and don’t mind compromising when it comes to functionality or capability.

It can also appeal to first-time buyers who don’t want to spend much on their first external graphics module to put a bit of “pep” in to their suitably-equipped laptop’s or all-in-one’s graphics performance. Then, if you move up to a better external graphics module, perhaps a “card-cage” variety that can work with high-performance “gaming-rig” or “desktop-workstation” cards, you can keep one of these portable units as something to use on the road, for example.


Ambient Computing–a new trend

Article

Smart speakers like the Google Home are the baseline for the new concept of ambient computing

Lenovo see smart displays as a foundation for ambient computing | PC World

My Comments

A trend that is appearing in our online life is “ambient computing” or “ubiquitous computing”. This is where the use of computing technology is effectively part of our daily lives without us having to do something specific about it.

One driver that is facilitating it is the use of voice-driven assistant technology like Apple’s Siri, Amazon’s Alexa, Google’s Assistant or Microsoft’s Cortana. It initially manifested in mobile operating systems like Android or iOS but has come about more so with smart speakers of the Amazon Echo, Google Home or Apple HomePod kind, along with Microsoft and Apple putting this functionality in to desktop operating systems like MacOS and Windows.

Lenovo Smart Display press picture courtesy of Lenovo USA

as are smart displays of the Lenovo Smart Display kind

As well, Amazon and Google have licensed out front-end software for their voice-driven home assistants so that third-party equipment manufacturers can integrate this functionality in their consumer-electronics products. It also includes the availability of devices that connect to larger-screen TVs or higher-quality sound systems to use them as display or audio surfaces for these voice-driven assistants, even simply to play audio or video content pulled up at the command of the user.

Lenovo underscored this with their current and up-and-coming Smart Display products, including the Lenovo Yoga Smart Tab which was premiered at IFA 2019 in Berlin. These are based on the Google Home platform and Lenovo were underscoring the role of these displays in ambient computing.

Another key driving factor is the Internet of Things which may be seen in the home context as lights, appliances and other devices connected to the home network and Internet. It doesn’t matter whether they connect to the IP-based home network directly or via a “home hub” device. These work with the various voice-driven home-assistant platforms as sensors or controlled devices or, in some cases, alternate control surfaces.

It extends beyond the home through interaction with various building-wide or city-wide services that relate to energy use, transport, personal security amongst other things.

The other key driver that is highlighted is the use of distributed computing or “the cloud” where the data is processed or presented in a manner that is made available via the Internet on any device. It can also include online services that present information or content at your fingertips from anywhere in the world. In some cases, there is the use of data aggregation to create a wider picture of what is going on.

What this all adds up to is the concept of an “information butler” that responds with information or content as you need it. This underscores that ambient or ubiquitous computing is not just a Silicon Valley buzzword but a real concept.

What does the concept of ambient or ubiquitous computing underscore?

Here it is the use of information technology in a manner that blends in with your lifestyle rather than being a separate activity. You interact with one or more of the endpoints while you undertake a regular daily task, and this can be about surfacing information you need or setting up the environment for that activity. It relies less on active participation by the end-user.

Ambient computing is adaptive in that it fits in and adapts to your changing needs. It is also anticipatory because it can anticipate future needs like, for example, changing the heating setting to cope with a change in the weather. It also demonstrates context awareness by recognising users and the context of their activity.

But ambient computing still has its issues. One key issue that is called out frequently is end-user privacy including protection of minor children when users interact with these systems. An article published by Intel underscores this in the context of simplifying the management of our privacy wishes with the various devices and online services through the use of “agent” software.

This also relates to data security for the infrastructure along with data sovereignty (which country the data resides in) due to issues like information theft and use of information by foreign governments.

Similarly, allowing ambient-computing environments to determine activities like what content you enjoy can be of concern. This is more important because you may choose particular content based on your values and what others who have similar tastes and values recommend. It can also raise concerns about fostering addiction to content that can be socially harmful or enforcing the consumption of a particular kind of content upon people at the expense of other content.

Another factor that can creep up if common data-interchange standards aren’t implemented is the existence of data “silos”. This is where an ambient computing environment is limited to hardware and software provided by particular vendors. It can limit competition in the provision of these services which can restrict the ability to innovate when it comes to developing these systems further.

But what is now being seen as important for our online life is the trend towards ubiquitous ambient computing that simply is part of our lives.


Different kinds of cloud IT systems–what to be aware of

Apple iPad Pro 9.7 inch press picture courtesy of Apple

The iPad is seen as part of the cloud-based mobile computing idea that Silicon Valley promotes

Very often “cloud” is used as a Silicon-Valley-based buzzword when describing information-technology systems that have any sort of online data-handling abilities.

This is more so if the IT system is sold to the customer “as a service” where the customer pays a subscription to maintain use of the system. It also is used where the user’s data is stored at an online service with minimal data-processing and storage abilities at the user’s premises.

It is because small business users are being sold on these systems typically due to reduced capital expenditure or reduced involvement in maintaining the necessary software. It also allows the small business to “think big” when it comes to their IT systems without paying a king’s ransom.

What is strictly a cloud system

Single Server online system

But, strictly speaking, a cloud-based system relies on multiple online locations to store and/or process data. Such a system would have multiple computers at multiple data centres processing or storing the data, whether in one geopolitical jurisdiction or many depending on the service contract.

This is compared to the single-server online IT system sold as a service that implements at least a Web-based “thin-client” where you work the data through a Web page and, perhaps, a mobile-platform native app to work your data on a smartphone or tablet. Typically, the data would be held on one system under the control of the service provider, with this system existing at a data centre. It works in a similar vein to common Internet services like email or Web-hosting with the data held on a server provided by the Web host or ISP.

Hybrid cloud systems

Hybrid Cloud online system

Hybrid Cloud online system with primary data kept on premises

One type of cloud system is what could be best described as a “hybrid” system that works with data stored primarily on the user’s premises. This is typically to provide either a small private data cloud that replicates data across branches of a small business or to provide online and mobile functionality such as to allow you to manage the data on a Web page or native mobile-platform app anywhere around the world, or to provide messaging abilities through a mobile-messaging platform.

For example, a lot of NAS units are marketed as “cloud” NAS units but these devices keep the user’s data on their own storage media. Here, they use the “cloud” functionality to improve discovery of that device from the Internet when the user enables remote access functionality or data-syncing between two NAS devices via the Internet. It is due to the reality that most residential and some small-business Internet connections use outside IP addresses that change frequently.

WD MyCloud EX4100 NAS press image courtesy of Western Digital

WD MyCloud EX4100 NAS – one of the kind of NAS units that uses cloud functionality for online access

Or a small medical practice who keeps their data on-premises is sold a “cloud-based” messaging and self-service appointment-management add-on to their IT system. Here, the core data is based on what is held on-premises but the messaging functionality or Web-based user interface and necessary “hooks” enabling the mobile-platform native app for the self-service booking function are hosted on a cloud service built up by the add-on’s vendor. When a patient uses the mobile-platform app or Web-front to book or change an appointment, they alter the data on the on-premises system through the cloud-hosted service.

It may also be used with something like an on-premises accounting system to give business functionality like point-of-sale abilities to a mobile-platform device like an iPad through the use of a cloud-based framework. But the core data in the on-premises system is altered by the cloud-based mobile-platform setup as each transaction is completed.
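As a rough sketch of that hybrid pattern, the cloud layer acts as little more than a relay while the authoritative data stays on-premises. The endpoint names and tunnel URL below are invented for illustration – each real vendor will have its own API.

```typescript
// Minimal sketch, assuming a hypothetical REST bridge between a cloud-hosted
// booking front-end and an on-premises practice-management system.

interface BookingRequest {
  patientId: string;
  slotIso: string; // e.g. "2019-10-02T09:30:00+10:00"
}

// Runs in the vendor's cloud: accepts the mobile app's request, then pushes
// the change down a persistent, authenticated tunnel to the premises.
async function handleBooking(req: BookingRequest): Promise<void> {
  const onPremAgent = "https://tunnel.example-vendor.com/practice-1234"; // invented
  const response = await fetch(`${onPremAgent}/appointments`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!response.ok) {
    throw new Error(`On-premises system rejected booking: ${response.status}`);
  }
  // The authoritative record stays on-premises; the cloud layer only relays.
}
```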

Full-cloud systems

Full Cloud online system

Full Cloud online system with data processing and storage across multiple different computers

On the other hand, a full-cloud system has the user’s primary data held online across one or more server computers with minimum local hardware or software to work the user’s data. There may be some on-premises data-caching to support offline operation such as to provide transaction-capture if the link is down or simply to improve the system’s performance.

The IT infrastructure for a full-cloud system will have some measure of scalability to allow for an increasing customer base, typically with the service provider annexing more computer power as the customer base increases. Such a service will have tiered pricing where you pay more for increased capacity.

Client software types

The user interface for an online or cloud IT system would primarily be Web-driven, where you work the data with a Web browser. On the other hand, it could use native client software that works tightly with the client computer’s operating system, whether as a “thick” client with a significant amount of local data-processing or storage on the endpoint computing device, or a “thin” client which simply presents a window to the data in the same vein as a Web browser.

Public vs private cloud

Another concept regarding cloud-based IT is the difference between a public cloud and a private cloud. The public cloud has the computing power managed by another firm like Microsoft Azure or Amazon Web Services while the private cloud has all its computing power managed by the service provider or client company and effectively isolated from public access through a separate private network.

This can be a regular server-grade computer installed at each of the business’s branches, described as an internal cloud. Or it can be multiple high-grade server computers installed at data centres managed by someone else but available exclusively for the business, known as a hosted private cloud.

Data Privacy, Security and Sovereignty

Another factor that comes in to question regarding cloud and online computing is the issue of data privacy, security and sovereignty.

This covers how the data is handled to assure privacy relating to end-users whom the data is about; and assurance of security over data confidential to the IT system’s customer and its end-users. It will call out issues like encryption of data “in transit” (while moved between systems) and “at rest” (while stored on the systems) along with policies and procedures regarding who has access to the data when and for what reason.

It is becoming a key issue with online services thanks to the European GDPR regulation and similar laws being passed in other jurisdictions which are about protecting end-users’ privacy in a data-driven world.

The issue of data sovereignty includes who has effective legal control over the data created and managed by the end-user of the online service, along with which geopolitical area’s rules the data is subject to. Some users pay attention to this thanks to jurisdictions like the continental-European countries which value end-user privacy and similar goals heavily.

There is also the issue of what happens to this data if the user wants to move to a service that suits their needs better or if the online service collapses or is taken over by another business.

Cloudlets, Fog Computing and Edge Computing

Edge Computing setup

Edge computing setup where local computing power is used for some of the data handling and storage

This leads me to the concept of “edge computing”, which uses terminology like “fog computing” or “cloudlets”. This involves computing devices relatively local to the data-creation or data-consumption endpoints that store or process data for the benefit of these endpoints.

An example can be about a small desktop NAS, especially a high-end unit, on a business premises that handles data coming in to or going out to a cloud-based online service from endpoint devices installed on that premises. Or it could be a server installed in the equipment rack at a telephone exchange that works as part of a content-delivery system for customers who live in the neighbourhood served by that exchange.

Qarnot Q.Rad press image courtesy of Qarnot

Qarnot Q.Rad room heater that is a server computer for edge-computing setups

Similarly, the Qarnot approach which uses servers that put their waste heat towards heating rooms or creating domestic hot water implements the principle of edge computing. Even the idea of a sensor drone or intelligent videosurveillance camera that processes the data it collects before it is uploaded to a cloud-based system is also about edge computing.

It is being touted due to the concept of decentralised data processing as a way to overcome throughput latency associated with the public Internet links. As well, this concept is being underscored with the Internet of Things as a way to quickly handle data created by sensors and turn it in to a form able to be used anywhere.
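Here is a hedged sketch of that sensor-and-camera pattern: the heavy lifting happens at the edge and only small event records travel over the Internet link. The detection function and cloud endpoint are placeholders rather than any real product’s API.

```typescript
// Illustrative sketch of the edge-computing pattern described above: an
// intelligent camera classifies frames locally and only sends compact event
// records upstream, instead of streaming raw video to the cloud.

interface EdgeEvent {
  cameraId: string;
  kind: "person" | "vehicle" | "none";
  timestamp: number;
}

function detectMotion(frame: Uint8Array): EdgeEvent["kind"] {
  // Placeholder for an on-device model; a real system would run a small
  // neural network here so raw footage never leaves the premises.
  return frame.length % 2 === 0 ? "none" : "person";
}

async function processFrame(cameraId: string, frame: Uint8Array): Promise<void> {
  const kind = detectMotion(frame);
  if (kind === "none") return; // nothing interesting: zero upstream traffic

  const event: EdgeEvent = { cameraId, kind, timestamp: Date.now() };
  await fetch("https://cloud.example.com/events", { // invented endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}
```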

Conclusion

Here, the issue for those of us who buy service-based IT, whether for our own needs or for a workplace, is to know what kind of system we are dealing with. This includes whether the data is to exist in multiple locations, at the premises or at one location.


Major improvements expected to come to Bluetooth audio

Article

Creative Labs Stage Air desktop soundbar press picture courtesy of Creative Corporation

The Bluetooth connectivity that the Creative Labs Stage Air desktop soundbar benefits from will be improved in an evolutionary way

The future of Bluetooth audio: Major changes coming later this year | Android Authority

My Comments

One of Bluetooth’s killer applications, especially for smartphones and tablets, is a wireless link between a headset, speaker or sound system to reproduce audio content held on the host computing device.

At the moment, the high-end for this use case is being fought strongly by some very determined companies. Firstly, Bose, Sony and Bang & Olufsen are competing with each other for the best active-noise-cancelling over-the-ear Bluetooth headset that you can use while travelling. This is while Apple and Sony are vying for top place when it comes to the “true-wireless” in-ear Bluetooth headset. It is showing that the Bluetooth wireless-audio feature is in fact part of a desirable feature set for headphones intended to be used with smartphones, tablets or laptops.

Let’s not forget that recently-built cars and recently-made aftermarket car-stereo head units are equipped with Bluetooth for communications and multimedia audio content. This is part of assuring drivers can concentrate on the road while they are driving.

JBL E45BT Bluetooth wireless headset

… just like headsets such as this JBL one

But this technology is to evolve over the second half of 2019 with products based on the improved technology expected to appear realistically by mid 2020. Like with Bluetooth Low Energy and similar technologies, the host and accessory devices will be dual-mode devices that support current-generation and next-generation Bluetooth Audio. This will lead to backward compatibility and “best-case” operation for both classes of device.

There is an expectation that they will be offered at a price premium for early adopters but the provision of a single chipset for both modes could lead towards more affordable devices. A question that can easily be raised is whether the improvements offered by next-generation Bluetooth audio can be provided to current-generation Bluetooth hosts or accessory devices through a software upgrade especially where a software-defined architecture is in place.

What will it offer?

USB-C connector on Samsung Galaxy S8 Plus smartphone

… like with the upcoming generation of smartphones

The first major feature to be offered by next-generation Bluetooth audio technology is a Bluetooth-designed high-quality audio codec to repackage the audio content for transmission between the host and accessory.

This is intended to replace the need for a smartphone or headset to implement third-party audio codecs like aptX or LDAC if the goal is to assure sound quality that is CD-grade or better. It means that the device designers don’t need to end up licensing these codecs from third parties which will lead to higher-quality products at affordable prices along with removing the balkanisation associated with implementing the different codecs at source and endpoint.

A question that will be raised is what will be the maximum audio-quality standard available to the new codec – whether this will be CD-quality sound working up to 16-bit 48kHz sampling rate or master-quality sound working up to 24-bit 192kHz sampling rate. Similarly, could these technologies be implemented in communications audio, especially where wide-bandwidth FM-grade audio is being added to voice and video communications technologies for better voice quality and intelligibility thanks to the wider bandwidth available for this purpose?
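Some back-of-envelope arithmetic shows what is at stake in that question – these are simply the raw PCM bit rates for the two quality tiers mentioned above, before any codec gets involved:

```typescript
// Raw PCM bit rate = sample rate × bit depth × channel count.

function pcmBitRate(sampleRateHz: number, bitDepth: number, channels: number): number {
  return sampleRateHz * bitDepth * channels; // bits per second
}

const cdGrade = pcmBitRate(48_000, 16, 2);      // 1,536,000 b/s ≈ 1.5 Mbps
const masterGrade = pcmBitRate(192_000, 24, 2); // 9,216,000 b/s ≈ 9.2 Mbps

// A Classic Bluetooth audio link offers roughly 1 Mbps of usable throughput
// at best, which is why a capable codec matters far more at the
// master-quality tier than at the CD-quality tier.
console.log({ cdGrade, masterGrade });
```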

Another key improvement that will be expected is reduced latency to a point where it isn’t noticeable. This will appeal to the gaming headset market where latency is important because sound effects within games are very important as audio cues for what is happening in a game. It may also be of benefit if you are making or taking videocalls and use your Bluetooth headset to converse with the caller. Here, it will open up the market for Bluetooth-based wireless gaming headsets.

It will also open up Bluetooth audio towards the “many-endpoint” sound-reproduction applications where multiple endpoints like headsets or speakers receive the same audio stream from the same audio source. In these use cases, you can’t have any endpoint receiving the program material reproducing the material later than others receiving the same material.

A key application that will come about is to implement Bluetooth in a multiple-channel speaker setup including a surround-sound setup. This will be a very critical application due to the requirement to reproduce each channel of the audio content stream concurrently and in phase.

It will also legitimise Bluetooth as an alternative wireless link to Wi-Fi wireless networks for multiroom audio setups. As well, the support for “many-endpoint” sound-reproduction will appeal to headset and hearing-aid applications where there is the desire to send content to many of these devices using a high-quality wireless digital approach rather than RF or induction-loop setups that may be limited in sound quality (in the case of induction-loop setups) or device compatibility (in the case of RF setups). There could even be the ability to support multiple audio-content channels in this setup, such as supporting alternative languages or audio description. In some cases, it may open up a use case where transport announcements heard in an airport or rail station can “punch through” over music, video or game sound-effects heard over a Bluetooth headset, in a similar way to how European car radios can be set up to allow traffic bulletins to override other audio sources.

A question that can be raised with the “many-endpoint” approach that this next-generation Bluetooth-audio technology is to support is whether it can support different connection topologies. This includes “daisy-chaining” speakers so that they are paired to each other for, perhaps, a multi-channel setup; using a “hub-and-spoke” approach with multiple headsets or speakers connected to the same source endpoint; or a combination of both topologies, including exploiting mesh abilities being introduced to Bluetooth.

Conclusion

From next year, as the newer generations of smartphones, laptops, headsets and other Bluetooth-audio-capable equipment are released, there will be a gradual improvement in the quality and utility of these devices’ audio functions.


Google to provide wireless across-the-room data transfer to Android

Article

USB-C connector on Samsung Galaxy S8 Plus smartphone

Google Fast Share could open up an improved point-to-point data transfer experience to Android smartphones

Google working on ‘Fast Share,’ Android Beam replacement and AirDrop competitor [Gallery] | 9To5Google.com

Fast Share is Google’s Android Beam replacement: Here’s what you should know | Android Authority

My Comments

Google is to provide as part of the Android platform a new “open-frame” point-to-point data-transfer solution. This solution, known as Fast Share, implements Bluetooth and peer-to-peer Wi-Fi to transfer text, pictures, Weblinks and other resources.

The Android platform had two different peer-to-peer data-transfer solutions previously. The first of these was the Bluetooth profile that was implemented by Symbian, Microsoft and others to transfer pictures, contact details and the like since the rise of the feature phone. The second of these was the Android Beam which used NFC “touch-and-go” as a discovery method and initially used Bluetooth but moved towards peer-to-peer Wi-Fi as a transfer method.

This was while Apple was using AirDrop across their ecosystem which included iPhones and iPads. In Apple’s true style, it was part of keeping as many users as possible on the iOS platform, and you couldn’t do things like transfer to other mobile or desktop platforms.

Google is intending to have Fast Share as part of their Play Services software package rather than being “baked in” to a particular version of the Android operating system. Here, Fast Share can be run with Android devices running older versions of the operating system which is a reality with a significant number of phones where the manufacturer won’t provide support for newer Android versions on particular models.

Advance images of this concept shown on the Web are underscoring a tentative plan to port it to their own ChromeOS and Apple’s iOS operating systems. If Microsoft and Apple are interested, it may be seen as a way for Windows or MacOS regular-computer users to share resources across the room on an ad-hoc basis. As well, Google could look at how Fast Share can be implemented in a “headless” form whether for sending or receiving the data.

You will have the ability to share file-based resources like photos, videos, PDFs or vCard-based contact-information files along with URLs pointing to Web-hosted resources or snippets of text. This will satisfy most usage requirements like sharing family snapshots, contact details or Weblinks.

There will be the option to give a sender “preferred visibility” status so they can discover your phone when you are near them. This status means that they will see your device even if you aren’t running the Fast Share app. Of course, users can turn Fast Share on and off as required, preferably with the idea of turning it off when using the phone in a public place unless they expect to receive something. You also have the ability to decline or accept incoming files so you have some control over what you receive.

The core issue with Google Fast Share and similar point-to-point across-the-room file-transfer platforms is that they have to work in a truly cross-platform manner so you don’t have to worry whether your friend sitting in that armchair across from you is using an iPhone or Android device when you intend to send that photo to them or share your contact details.


6GHz Wi-Fi technology moving towards room-by-room Gigabit Wi-Fi

Article

NETGEAR Orbi distributed WiFi system press image courtesy of NETGEAR

Distributed Wi-Fi setups like this NETGEAR Orbi will be heading towards the Gigabit Wi-Fi goal on the 6GHz waveband

ARRIS: How 6 GHz Wi-Fi will revolutionise the connected home | Wi-Fi Now

My Comments

ARRIS, who make home-network equipment for the American market, are pushing the idea that 6 GHz Wi-Fi is a major evolution for the home network.

This is coming about due to various national government departments who have oversight over radiocommunications use within their jurisdiction working on regulatory instruments to open up unlicensed low-power indoor use of the 6 GHz radio waveband. Such regulation is expected to be passed by the FCC in the US by mid-year 2020 and OFCOM in the UK by 2021 with other jurisdictions to follow suit over the next few years.

It will open up seven new 160MHz channels for the Wi-Fi 6 technology with the feasibility to open up a Gigabit Wi-Fi network. This is expected to lead to the evolution of the self-configuring distributed Wi-Fi setup with a Gigabit Wi-Fi backbone plus each access point offering a 160MHz Wi-Fi 6 channel alongside support for low-power narrower-bandwidth 2.4GHz and 5GHz channels for legacy equipment.
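As a rough sanity check on that Gigabit claim, the published 802.11ax PHY parameters for a single 160MHz channel work out like this (a sketch of the standard data-rate arithmetic, not a throughput promise):

```typescript
// Peak 802.11ax PHY rate for one spatial stream on a 160MHz channel.

const dataSubcarriers = 1960;      // data tones in a 160MHz 802.11ax channel
const bitsPerSymbol = 10;          // 1024-QAM (MCS 11)
const codingRate = 5 / 6;
const symbolDurationSec = 13.6e-6; // 12.8µs symbol + 0.8µs guard interval

const perStreamBps =
  (dataSubcarriers * bitsPerSymbol * codingRate) / symbolDurationSec;

console.log(perStreamBps / 1e6); // ≈ 1201 Mbps per spatial stream
// Two streams ≈ 2.4 Gbps of PHY rate; real-world throughput is well below
// that, but a gigabit of usable backbone per node becomes plausible.
```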

There will be the implementation of Wi-Fi EasyMesh and Wi-Fi EasyConnect standards to permit secure setup and an open-frame heterogenous distributed-wireless network.

One limitation I do see confronting this ideal that ARRIS put forward is the short-wavelength Wi-Fi backbone, which can be hindered by certain building materials and construction approaches like double-brick walls. There will also be the requirement to run many access points to make sure the average home is covered properly. Here, the wired backbone, whether “new wires” like Ethernet or “no new wires” like HomePlug AV2 powerline or MoCA TV-antenna coaxial, still has to be considered for a multiple-access-point network.

ARRIS was even positioning for the evolution of the distributed Wi-Fi network to have each room with its own access-point node capable of yielding Gigabit bandwidth. They also put forward ideas like having these access points mounted on the ceiling. But I would also prefer the idea of a normally-sessile endpoint device like a network printer, Amazon-Echo-style smart speaker or a smart TV being its own access point that is part of the distributed Wi-Fi network. It then avoids the need to equip a room with an extra access point if you are intending to have this kind of device in that room.

The use of Wi-Fi 6 technologies will also be about working with environments that are congested as far as Wi-Fi wireless networking is concerned. These environments like multiple-premises buildings, airports or hotels are likely to have many Wi-Fi devices operating on many Wi-Fi networks which with prior technologies leads to poor performance especially on the throughput and latency side.

It may take a few years for the Wi-Fi wireless network to hit the Gigabit throughput mark as the 6 GHz band opens up and more access-point and client devices come on the market.


USB-C displays are coming in droves–what should you look for?

Article

Dell S2718D 27" slimline monitor press image courtesy of Dell

Dell’s slimline 27″ monitor with its electronics in its base is an example of a USB-C monitor

Best USB-C Monitors for PC in 2019 | Windows Central

My Comments

An increasing number of standalone display monitors are becoming equipped with the USB-C connection as a path for connecting your computer to them.

This connection uses the DisplayPort alternate mode offered by the USB-C standard for video transfer from the host computer as a minimum feature. This is part of the USB-C standard that allows different host-peripheral connection paths like DisplayPort to run via the same physical cable alongside the USB-based host-peripheral data transfer. But most of these monitors will support being a power source compliant with the USB Power Delivery device class so they can provide power to and charge a laptop that is connected to them as a host. Better implementations will even support being their own powered USB 3.x hub and have two or three traditional USB 3.0 ports.

These USB-C plugs are now another connection path for linking your computer to a display monitor

There will be at least some HDMI or standard DisplayPort input connections for legacy setups such as desktop or laptop computers that don’t come with USB Type-C connections. But you can exploit the hub functionality in those monitors that implement it if you use a USB cable that has a Type C connector on one end and a traditional Type A connector on the other end.

If the monitor has any sort of audio functionality, this will be facilitated through the DisplayPort or HDMI connections. In the case of the USB-C setup, the sound will be transferred using the DisplayPort alt ability that this connection provides. Most of the monitors with this function will have a 3.5mm stereo audio-output jack that can work to headphone or line-out specifications and may have integral speakers.

You will need to have your computer use the “display audio” driver rather than its audio chipset to use the monitor’s audio abilities via the USB-C, DisplayPort or HDMI connections. As well, don’t expect much in sound quality from the integral speakers and it may be a better idea to use a set of good active speakers or your favourite stereo setup for the sound.

Like with monitors that don’t come with the USB-C connection, buying a USB-C monitor will be more of a “horses for courses” approach. Here you will come across 4K UHDTV screens with wide colour gamut and HDR support which will come in handy if you engage in photo or video editing. This is while there will be monitors optimised to work with the latest high-performance discrete display subsystems for those of us who like playing the latest high-end games.

Another question that will come up if your computer has a Thunderbolt 3 output is how these screens will fit in with external graphics modules that you may use. Most of these modules will require you to connect their video output to the monitor’s HDMI or DisplayPort connections as if you are connecting a legacy host computer but some may use a secondary Thunderbolt 3 / USB-C connection to allow you to connect your USB-C monitor with its video coming from the module’s graphics infrastructure.

Use Cases

One main use case would be for those of us who have a laptop-based working environment. Here, you would use a USB-C monitor with integrated hub functionality and connect your wired peripherals to the monitor while your laptop is connected to your monitor using one cable. You then end up dealing with just one cable when you bring your computer to or remove it from that workspace.

Another main use case is if you are dealing with a “next unit of computing” midget computer or other small-form-factor computer that implements this connection type. Where manufacturers see the USB-C connection type as a way to reduce the computer’s size, these monitors can earn their keep as a preferred display type for these systems.

Do I need to replace my existing monitor for one with a USB-C connection?

At the moment, you don’t need to replace your existing monitor with one that has a USB-C connection if your existing monitor serves your needs well. This is more important for those of us who have existing computer equipment that isn’t equipped with this connection or aren’t buying equipment that will have this connection.

But if you are replacing an existing monitor with something that better suits your needs or adding one to a multiple-display setup, this connection type can be a valid feature to be aware of when comparing the feature lists of each candidate unit. Here, it will be about having one that is future-proof especially when you use computer equipment that has this connection type.

What to look for

Make sure the monitor you are after has the display size, aspect ratio and other abilities that suit your key usage scenario. For example, gamers should look for monitors that work tightly with their preferred high-performance graphics cards.

Look for a USB-C monitor that has a USB hub with plenty of USB 3.0 downstream connections. Another USB-C downstream connection can be an asset worth considering. But at least one of the USB sockets must be easily discoverable and accessible from your operating position.

The USB-C monitor should be able to work as a power source compliant with the USB Power Delivery specification with an output of 45 watts or more. This will mean that you don’t need to use your laptop computer’s battery charger to run your laptop at home or work.

Audio-equipped USB-C monitors must have an external line-level or headphone audio output so you can use them with your favourite audio devices.

If the monitor has an integrated Webcam, it may be an asset for your privacy to have a user-operated shutter across the camera lens or the Webcam to be of a “pop-up” design that is concealed when not in use.

Conclusion

Over this year, display monitors with a USB-C connection will become more common as the number of laptop and small-profile computers kitted out with this or the Thunderbolt 3 connection increases.


What will passwordless authentication be about?

Facebook login page

You soon may not need to remember those passwords to log in to the likes of Facebook

The traditional password that you use to authenticate with an online service is in the throes of losing this role.

This is coming about due to a lot of security risks associated with server-based passwords. One of these is for us to use the same password across many online services, leading towards credential reuse and “stuffing” attacks involving “known” username/password or email/password pairs. As well, the password is also subject to brute-force attacks including dictionary attacks where multiple passwords are tried against the same account. It also includes phishing and social-engineering attacks where end-users are tricked in to supplying their passwords to miscreants, something I had to rectify when an email account belonging to a friend of mine fell victim to phishing. This is facilitated by users creating passwords based on personal facts that work as aide-memoires. Passwords can also be stolen through the use of keyloggers or compromised network setups.

Managing multiple passwords can become a very user-unfriendly experience, with people ending up using password-vault software or recording their passwords on a paper or electronic document. As well, some applications can make password entry very difficult. Examples of these include connected-TV or games-console applications where you pick each character out using your remote control’s or game controller’s D-pad to enter the password.

You will be able to set your computer up to log you in to your online services with a PIN, fingerprint or other method

The new direction is to implement passwordless authentication where a client device or another device performs the authentication role itself and sends an encrypted token to the server. This token is then used to grant access to the account or facilitate the transaction.

It may be similar to multifactor authentication where you do something like enable a mobile authenticator app after you key in your online service’s password. But it also is very similar to how a single-sign-on or social-sign-on arrangement works with the emphasis on an authenticated-session token rather than your username and password as credentials.

The PIN will be authenticated locally and used to enable the creation of a session token for your online service

There will be two key approaches which are centred around the exchange of an asymmetric key pair between the client and server devices.

The first of these will involve a client device that holds the private key – either the primary client device like the laptop computer or smartphone that you are using the online service on, or a secondary client device like your smartphone. You authenticate with that device using a device-local PIN or password, or a biometric factor like your fingerprint or face.

Android security menu

The same holds true for your Android or other smartphone

The second will involve the use of a hardware token like a FIDO2-compliant USB or Bluetooth access key or an NFC-compliant smart card. Here, you activate this key to authenticate the client computer for your online session, with the private key staying within the token’s secure storage.

It is being facilitated through the use of FIDO2, WebAuthN and CTAP standards that allow compliant Web browsers and online services to implement advanced authentication methods. At the moment, Windows 10 is facilitating this kind of login through the use of the Windows Hello user-authentication functionality, but Android is in the process of implementing it in the mobile context.
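To illustrate, here is a minimal sketch of the registration step using the standard WebAuthn browser API. The relying-party details, user record and challenge are placeholders – in practice these would come from the online service’s server:

```typescript
// Registration: the browser asks the platform (e.g. Windows Hello) or a
// hardware token to create a new key pair bound to this service.

async function registerPasswordlessCredential(): Promise<Credential | null> {
  // The private key this creates never leaves the device's secure element;
  // the server only ever stores the public key.
  return navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
      rp: { name: "Example Service", id: "example.com" },    // placeholder relying party
      user: {
        id: new TextEncoder().encode("user-1234"),           // placeholder user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
      authenticatorSelection: {
        userVerification: "required", // triggers the local PIN or biometric
      },
    },
  });
}
```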

There is effectively the use of a form of multifactor authentication to enable the cryptographic key pair between the client and server devices. This is based around the device you are using and the fact you are there to log in.

HP Elitebook 2560p business notebook fingerprint reader

The fingerprint reader on this HP Elitebook and similar laptops will become more important here

If the authentication is to take place on the primary client device like a laptop or smartphone, the device’s secure element like a TPM module in a laptop or the SIM card in a smartphone would be involved in creating the private key. The user would enter the device-local PIN or use the fingerprint reader to enable this key which creates the necessary session token peculiar to that device.

On the other hand, if it is to take place on a secondary device like a smartphone, the authentication and session-token generation occurs on that device. This is typically with the user notified to continue the authentication on the secondary device, which continues the workflow on its user interface. Typically this will use a Bluetooth link with the primary device or a synchronous Internet link with the online service.

The online service has no knowledge of these device-local authentication factors, which makes them less likely to be compromised. For most users, this could be the same PIN or biometric factor used to unlock the device when they switch it on and they could use the same PIN across multiple devices like their smartphone or laptop. But the physical device in combination with the PIN, fingerprint or facial recognition of that user would be both the factors required to enable that device’s keypair and create the session token to validate the session.
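The subsequent sign-in step follows the same shape – the browser asks the authenticator to sign a fresh server-issued challenge, which is what triggers that local PIN or biometric check. Again, a sketch with placeholder values:

```typescript
// Sign-in: the authenticator signs the challenge with the previously
// registered private key; the server verifies it with the stored public key.

async function signInPasswordless(): Promise<Credential | null> {
  return navigator.credentials.get({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
      rpId: "example.com",           // placeholder relying party
      userVerification: "required",  // the local PIN or fingerprint gate
    },
  });
}
```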

A hardware token can be in the form of a USB or Bluetooth security key or an NFC smart card. But this device manages the authentication routines and has private keys kept in its secure storage.

There will be the emphasis around multiple trusted devices for each service account as well as the same trusted device supporting multiple services. Some devices like hardware tokens will have the ability to be “roaming” devices in order to do things like enabling a new device to have access to your online services or allow ad-hoc use of your services on shared equipment such as the public-use computers installed at your local library. They will also work as a complementary path of verification if your client device such as a desktop PC doesn’t have all the authentication functionality.

Similarly, when you create a new account with an online service, you will be given the option to “bind” your account with your computer or smartphone. Those of us who use online services that implement legacy-based sign-in but are enabled for passwordless operation will have the option in the account-management dashboard to bind the account with whatever we use to authenticate, and have it as a “preferred” authentication path.

Some of the passwordless authentication setups will allow use with older operating systems and browsers not supporting the new authentication standards by using time-limited or one-use passwords created by the authentication setup.

A question that will arise regarding the new passwordless Web direction is how email and similar client-server setups that implement native clients will authenticate their sessions. Here, they may have to evolve towards having the various protocols they work with move towards key-pair-driven session tokens associated with the particular service accounts and client devices.

There will also be the issue of implementing this technology in to dedicated-purpose devices, whether as a server or client device. Here, it is about securing access to the management dashboards that these devices offer, which has become a strong security issue thanks to attacks on routers and similar devices.

It will take time to evolve towards passwordless authentication.
