Category: Current and Future Trends

Video conferencing finds a use in the classroom due to necessity

Article

Creative Labs LiveCam Connect HD Webcam

The Webcam attached to a laptop is used by a teacher to teach classes at a school from home while under quarantine

High School Teacher Holds Class Via Videochat While in Coronavirus Quarantine | Gizmodo

Coronavirus precaution sees Darwin teacher adopt a different kind of screening while teaching from home | ABC News

My Comments

Over the years, technology has played an important role in education in many forms. This is most evident in distance education, which tends to focus on people living in rural or remote communities.

Dell XPS 13 8th Generation Ultrabook at QT Melbourne rooftop bar

… along with an ordinary laptop

Radio has earned its keep in Australia since 1944, with outback communities benefiting from the School Of The Air, where classes were held over distance using two-way radio technology. That technology, driven by pedal-operated two-way radio setups, was already being used by cattle stations and other communities to keep in touch with each other and the outside world.

The UK integrated radio and TV as key course-delivery technologies to drive their Open University concept (Wikipedia article) for post-secondary education which began in 1967. Other technologies like computers, home video recorders and DVDs earnt their keep in this role as they came about.

Similarly, the outbreak of World War II created the concept of an “over-the-air” class for kindergarten-aged children due to the wartime closure of Australian kindergartens. This radio show, known as “Kindergarten Of The Air”, ended up being broadcast nationally on ABC radio and continued until the 1980s. No doubt a significant number of Australian Baby Boomers would remember hearing this program on the radio when they were little. Some overseas stations even ended up syndicating this radio-based kindergarten class, while others followed the ABC’s example by running their own pre-school educational radio programs on their networks.

BBC Model B microcomputer By Soupmeister (Acorn BBC Model B) [CC BY-SA 2.0 (http://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons

BBC Model B personal computer – the core of an original TV-based computer-education project that took place in the UK during the early 1980s

What Australia’s ABC had done with “Kindergarten Of The Air” effectively contributed the idea of educational content delivery via broadcast radio to the world of broadcasting. It opened the doors for the likes of Play School and Sesame Street; for schools-focused radio and TV broadcasts that tie in with lesson plans, including the BBC’s computer-education projects centred around the BBC Model B computer and the newer micro:bit board computer; and for the aforementioned Open University in the UK and now in Australia, amongst other things.

Many education-technology futurists have over the years envisioned the use of video conferencing and allied technologies as a regular part of the classroom. Most of us would see this particularly in the distance-learning sphere, where one studies at home and receives tuition by post, email or similar transmission methods; perhaps with a smattering of other use cases like exchange students communicating with the school community they are normally based at.

But the recent coronavirus outbreak has legitimised the use of video conferencing by allowing a teacher to continue giving mathematics classes while he was in self-imposed quarantine at home after visiting affected parts of China for the Chinese New Year. Here, he was able to use common personal computing and Internet technology along with the school’s AV and IT setup to teach maths “over the wire” and assure learning continuity for his students. Other teachers in the same predicament, whether in China or Australia, were using similar technology to work around any necessary quarantine requirements arising from this coronavirus outbreak.

Just as the ABC’s “Kindergarten Of The Air” used radio to deliver pedagogical content during a wartime scenario where educational facilities weren’t available, the schoolteacher’s recent use of videoconferencing to continue teaching maths while under quarantine underscores technology as a workaround for delivering education when disaster strikes.

Videoconferencing and allied technologies could also augment learning in a range of ways, such as cost-effective access to specialist lessons. They can also benefit teachers and students travelling away from their home place of education by allowing them to share knowledge from that trip with students back at that place of education.

Who knows what this could lead to in the sphere of education?

Consumer Electronics and Personal IT trends for 2020

Every year in January, the Consumer Electronics Show is held in Las Vegas, USA, and it gives a glimpse into the trends that will affect consumer electronics and personal IT. In most cases these are products that will be on the market this year, or proof-of-concept prototypes that demonstrate an upcoming technology.

The problem is that this exhibition focuses on what will be available in North America but a lot of the technology will be relevant to the rest of the world. In a lot of these cases, localised variants will appear at various trade shows or PR events that occur in Europe or other areas.

As well, the trade-show circuit will attract service-focused information-technology companies that don’t make hardware, maintain a hardware platform or create content. For them, it is simply about the provision of IT-based services as part of a ubiquitous computing environment, including the concept of experience-driven computing.

Connectivity Technology

Over the past year, the two main technologies that were called out regarding online connectivity or the home network were 5G mobile broadband and Wi-Fi 6 (802.11ax) wireless local networks. This is about very-high-bandwidth wireless data communications whether out and about or within your home or other small network.

As various radiocommunications regulatory agencies around the world “open up” the 6GHz waveband for Wi-Fi network use, with the USA’s Federal Communications Commission the first to do so, the Wi-Fi Alliance has created a specification identifier for network equipment working on this waveband. Known as Wi-Fi 6E, it identifies devices that can work the 6GHz waveband, in contrast to Wi-Fi 6 (802.11ax) devices that only work the 2.4GHz and 5GHz wavebands.

D-Link DIR-X5460 Wi-Fi 6 router press picture courtesy of D-Link USA

One of D-Link’s Wi-Fi 6 routers that also supports Wi-Fi EasyMesh – setting the standard for home network technology this year

Both these technologies became real with an increase in client devices and small-network infrastructure hardware supporting at least one of them. This included laptop computers and smartphones having this kind of functionality baked into them, as well as more home-network routers, distributed-Wi-Fi systems and range extenders being equipped with Wi-Fi 6. Some of the network-infrastructure vendors like Linksys and NETGEAR are even offering routers that combine both technologies – 5G mobile broadband as a WAN (Internet) connection and Wi-Fi 6 as a LAN (local-network) connection.

A step in the right direction for distributed-Wi-Fi networks was to see major home-network brands offer routers and/or range extenders compliant with the Wi-Fi EasyMesh standard. This allows you to create a distributed Wi-Fi network with equipment from different vendors, opening up the market for equipment from a diverse range of suppliers including telcos and ISPs, along with a pathway towards innovation in this space.

Bluetooth hasn’t been forgotten here, with the new Bluetooth audio specification being “set in stone” and premiered at CES 2020. This specification, known as Bluetooth LE Audio, works over the Bluetooth Low Energy transport and supports the LC3 (Low Complexity Communication Codec) audio codec, which packages the equivalent of the SBC audio stream used by existing Bluetooth audio setups in roughly half the bandwidth. This allows for longer battery runtimes, which will also lead to smaller form factors for audio devices due to the reduced need for a larger battery.

It also supports multiple independent, synchronised audio streams being sent from one source device to many sink devices. This strengthens use cases like hearing aids that work with Bluetooth and may supersede the inductive loop as a technology for assisted-listening setups. As well, the multiple-stream technology will be a boon for applications like multichannel Bluetooth speaker setups; or Bluetooth headphones as part of assistive audio, multilingual soundtrack options or semi-private listening arrangements.

The Bluetooth LE Audio specification is to be released in the first half of 2020, with compatible devices on the market by 2021. But there will also be the issue of having device support for this technology baked into operating systems as a class driver.

Dell XPS 13 2-in-1 Ultrabook - USB-C power

USB 4 will be the next stage for hardware connectivity and will include Thunderbolt 3

As for wired peripheral interconnection, USB 4 will be surfacing as a high-speed connection standard for computers and mobile devices. It will offer compatibility with Thunderbolt 3 thanks to Intel signing over the intellectual property rights for that protocol to the USB Implementers Forum. Because that compatibility is optional, it may be used by some computer vendors as a product differentiator, although the market will prefer that USB 4 computers and peripherals work with those that use Thunderbolt 3. Let’s not forget that the physical connector for USB 4 will be the Type C connection.

Let’s not forget that newer Android phones will use USB Power Delivery as the official standard for transferring power from chargers or powerbanks to themselves. This is about avoiding the use of proprietary fast-charge technologies and using something that is defined by the industry for this purpose.

Computer trends

Lenovo IdeaPad Creator 5 15" clamshell laptop press picture courtesy of Lenovo USA

Lenovo IdeaPad Creator 5 15″ clamshell prosumer / content-creator laptop

At the moment, as I outlined in the article about “prosumer” content creators being identified by computer manufacturers as a significant market segment, this year is being seen as the time to launch performance-optimised computers targeted at this user group. These units will be optimised to work reliably with popular content-creation software.

Let’s not forget that Lenovo is tying up with NEC to create the LAVIE computer brand, which targets mobile professionals. This comes after Toshiba sold its laptop-computer division to Sharp, where it now trades under the “Dynabook” brand; and after Sony sold off its VAIO computer brand, which continues as a premium computer name. Is this symbolic of where the Japanese computer names are heading, with a focus on creating premium business laptops and tablets?

As well as offering their newer-generation CPUs, Intel has demonstrated that they can offer their own high-performance personal-computer display infrastructure. They even demonstrated a graphics card that uses Intel-designed discrete GPU technology. This leads towards them competing with NVIDIA and AMD when it comes to discrete graphics-infrastructure technology and could lead to a three-way race in this field.

This comes alongside AMD putting a lot of effort into their Ryzen CPUs, which are putting them in a position to compete on a par with Intel’s Core CPUs. As well, Intel and AMD could head towards creating performance computing setups based around their own CPUs and discrete graphics infrastructure, including setups that have the CPU and discrete GPU on the same silicon.

There is also an increase in the number of “Always Connected PCs” that run on ARM RISC microarchitecture rather than the traditional Intel x86/x64 CISC microarchitecture. These are about operating on batteries for a very long time and having 4G, if not 5G, mobile-broadband modems with classic-SIM or eSIM service authentication. Most likely I would see them as the direction for portable mainstream business computing.

Dell G5 15 Special Edition budget gaming notebook press picture courtesy of Dell USA

Dell G5 15 Special Edition budget gaming laptop with AMD Ryzen and Radeon silicon

For gaming, Dell has premiered a budget gaming-grade laptop that uses an AMD Ryzen CPU and an AMD Radeon graphics processor but is styled like their other “G Series” gaming laptops. As well, Lenovo took an interesting step with one of their gaming laptops by using Intel integrated graphics for its graphics infrastructure while equipping it with a Thunderbolt 3 port. Here, the user buys an external graphics module, typically the Lenovo BoostStation card-cage unit (Lenovo’s first product of its kind), to have the machine perform at its best. What this is about is a trend towards creating an entry-level performance laptop range, very similar to buying the increased-performance “GT” variant of a popular family car model.

Lenovo ThinkPad X1 FOLD prototype folding-display computer press picture courtesy of Intel USA

Co-engineered by Intel and Lenovo, ThinkPad X1 FOLD is a foldable-screen device built on the Intel Core processor with Intel Hybrid Technology (code-named “Lakefield”). (Credit: Lenovo) – an example of what folding computers are about

 

Another trend that is being shown frequently is multiple-screen or folding-screen portable computers. This is being promoted by Intel and Microsoft in the context of Windows 10X and newer Intel chipsets. It is being driven by the multiple-screen and folding-screen smartphones that Samsung and others are on the verge of releasing as finished products. But this technology will appeal mainly to early adopters until it is seen as legitimate by the general user base.

As far as small-form-factor desktop computers are concerned, Intel is working towards a modular “next unit of computing” platform which has the whole computer system on a card the same size as a traditional PCI expansion card. This platform, known as Ghost Canyon, uses the “Compute Element”, a user-swappable card, and is intended to bring back the joy of upgrading a computer’s performance ourselves even if we go for a smaller computer platform.

Connected-TV technology

This year has heralded interest in 8K UHDTV, which has twice the horizontal and vertical resolution of 4K UHDTV and therefore four times the pixels. As well, the 8K Association has been formed in order to set standards for domestic 8K UHDTV applications and promote this technology.

This is in conjunction with ATSC 3.0, also known as NextGenTV, being premiered at CES 2020 as a new direction for free-to-air TV in the USA. It is being valued thanks to people moving away from cable and satellite pay-TV services towards Netflix and other video-on-demand services augmented by free-to-air TV. Here, it will allow Americans to benefit from 4K UHDTV and Dolby Atmos technology via the TV antenna. Like the DVB and HbbTV-based standards used in Europe and Oceania, this technology combines the over-the-air signal with broadband Internet data to achieve advanced TV experiences.

There is also increased robustness as far as antenna-based reception is concerned, which may allow for the use of indoor antennas without their associated problems. As well, mobile users will benefit from this newer technology for on-the-road viewing. There will also be an emphasis on broadcast-LAN operation, with one tuner sharing a broadcast signal amongst multiple TVs. Users can upgrade their existing televisions to this technology by connecting an ATSC 3.0 set-top box as they see fit, but there will be some TVs, most likely “living-room” models from a few manufacturers, that will support this standard.

The 4K AMOLED screen is entering “Goldilocks” territory when it comes to product price and screen size – not too big and expensive, not too small or cheap, but just right. The trade sees this as “mid-market” territory but, for a TV, it is about something that appeals to more people without being too ostentatious or requiring one to pay a king’s ransom.

The advantage it has over the LCD screen that rules this market territory is to have increased contrast and richer colours, something that those of you who have a smartphone or tablet with an OLED display benefit from. As well, it is a technology that legitimises the high-dynamic-range and wide-colour-gamut video reproduction technology being pushed by the film and video industries.

Here, Sony released the first 48” 4K AMOLED screen that would be able to fit most viewing areas. This includes apartments and small houses as well as use in bedrooms, or secondary lounge areas including living rooms which aren’t frequently used for watching TV. As well, some AMOLED TV manufacturers are pitching sets that cost under US$1000. Here, this price point puts the AMOLED TV within reach of most middle-class families who are considering upgrading to this kind of technology without paying a price that sounds too vulgar.

Another trend affecting TVs is support for high and variable refresh rates. This appeals to games consoles, letting the TV behave like the game-optimised variable-refresh-rate monitors typically partnered with PC-based desktop gaming rigs and offer the same kind of refresh behaviour that a gaming PC’s display card would. This is being factored in because the large-screen TV is being valued in the context of gaming, especially with one-machine multiple-player games or the excitement of playing a favourite game on that big screen.

As well, I see the Apple TV and Android TV platforms as dominant smart-TV / set-top-box platforms due to the existence of strong code bases, strong developer communities and a well-nurtured app store. Here, the Android platform will appeal to TV vendors who haven’t invested in a smart-TV platform along with some third-party set-top box vendors. But the Android TV platform as a set-top-box platform has to be disassociated from the so-called “fully-loaded” Android boxes that are sold online from China for access to pirated TV content.

This is being driven by an avalanche of video-on-demand services that will appear over this year. Some of these will be subscription-based and offer new original content produced by the service’s owner while others will use advertising, perhaps as part of a freemium arrangement, and work heavily on licensing deep back-catalogue material. There will also be an effort amongst the new video-on-demand providers to take an international approach, appearing in multiple markets around the world, most likely with the goal of licensing content in all international markets concurrently.

It will even lead to each content-production name having its own video-on-demand service that primarily hosts content from its stable. But the question that arises is how many video-on-demand subscriptions we will have to budget for and maintain if we want content that reflects our choices.

Audio Technology

The DAB+ digital broadcast radio platform is increasing its footprint within Europe and across some parts of Africa and Asia. It includes some European countries like Norway and Switzerland moving their broadcast infrastructure away from AM and FM radio to this technology.

Pure Sensia 200D Connect Internet radio

Pure Sensia 200D Connect Internet radio – a representative of the current trend towards the “hybrid radio” concept

Here, it would be about an increased variety of devices that have broadcast-radio reception functionality based on this platform, including those that have Bluetooth and/or Internet-radio functionality. As well, more vehicle builders are being encouraged to supply DAB+ radios as factory-standard in all of their vehicles. Let’s not forget that value-priced DAB+ and Internet radio equipment will be equipped with a colour display that shows things like station branding or album cover-art while you listen to that station.

RadioDNS will be something that facilitates a hybrid broadcast-broadband approach to broadcast radio. This will include the ability to switch between a broadcast-radio channel and an Internet radio stream for the same station, or allow richer supporting content to appear on the set’s display. It can also be about a “single-dial” approach to finding stations across broadcast and Internet bands. RadioDNS has been given more “clout” in the USA because it can work with AM, FM or HD Radio (IBOC digital radio on AM and FM), which is used there.
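
To picture roughly how RadioDNS ties a broadcast signal to its Internet services, here is a minimal Python sketch that builds the DNS name a hybrid radio would look up for an FM station. The station values are made-up illustrative figures and the exact formatting rules belong to the RadioDNS specification, so treat this as a simplified assumption rather than a reference implementation.

```python
# Rough sketch of the RadioDNS lookup idea for an FM/RDS station.
# The station values below are illustrative only; a real receiver takes them
# from the broadcast signal and follows the RadioDNS specification exactly.

def radiodns_fm_fqdn(frequency_mhz: float, pi_code: str, gcc: str) -> str:
    """Build the DNS name used to discover a station's Internet services."""
    freq = f"{int(round(frequency_mhz * 100)):05d}"  # e.g. 95.8 MHz -> "09580"
    return f"{freq}.{pi_code.lower()}.{gcc.lower()}.fm.radiodns.org"

if __name__ == "__main__":
    # Hypothetical station: 95.8 MHz, RDS PI code C201, global country code CE1
    print(radiodns_fm_fqdn(95.8, "C201", "CE1"))  # 09580.c201.ce1.fm.radiodns.org

    # A hybrid radio would then look up SRV records under this name
    # (for example _radioepg._tcp or _radiovis._tcp) to find the station's
    # programme guide, visual slideshow or Internet stream, which is how it
    # can hand over between the broadcast signal and the online stream.
```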

Sonos’s partnership with IKEA, the furniture chain that sells furniture you assemble yourself with an Allen key, is demonstrating that a high-end multiroom-audio platform can be partnered with a commodity retail brand. What it could lead to is an incentive to build these kinds of platforms around a mixture of premium, value and budget units, allowing for things like a low-risk “foot-in-the-door” approach for people starting out on a platform, or for people who have the premium equipment to build out their system with cheaper equipment in secondary listening areas. It could even put pressure on the industry to adopt a common standard for multiroom-audio setups.

The streaming audio-on-demand scene is moving to shore itself up against Spotify. Initially this is about offering either an advertising-supported free limited-service tier, as Amazon and Google are doing, or a premium service tier with a focus on CD-quality or master-quality sound, which is what Amazon is doing. But it could easily go beyond the “three-tier service” with things such as improved playlists, underrepresented content, support for standalone audio equipment, and business music services. As well, your ISP or telco could provide access to a streaming-audio service as part of their service package, or you could buy a piece of network-enabled audio equipment and benefit from reduced subscription rates for an online music service.

The headphone scene is seeing some strong contenders when it comes to excellent value for money in noise-cancelling Bluetooth headsets.

Bose initiated this battle with the QuietComfort 35 II headphones, which the technology press’s reviewers saw as a standard-setter for this class of headset. Then Sony introduced the WH1000XM3 headphones, which were seen as on a par with the Bose cans but at a more affordable price, with some press using terms like “Bose-killers” to describe them. Bang & Olufsen came to the party with a premium noise-cancelling Bluetooth headset known as the Beoplay H9. But lately Bose has answered Sony by offering the Noise Cancelling 700 headset, which effectively does that job in a minimalist form. This is while Sony intends to launch the WH1000XM4 this year to raise the bar against Bose and their current product.

As far as “true wireless” active-noise-cancelling earbuds are concerned, Apple with their AirPods Pro and Sony with their WF1000XM3 have established themselves at the top of the pack for excellence. What I see here is that someone else could answer them with that same level of excellence, especially at a value price. This product class is also likely to benefit from the Bluetooth LE Audio specification due to the requirement for a small battery in each earbud and the small size of each earbud.

What Apple, Bose, Sony and B&O are highlighting is that they could easily compete with each other to achieve excellent products when it comes to headphones you use with your laptop, smartphone or tablet. It could even be a chance for other companies to join in and raise the bar for premium everyday-use headset design, including the idea of having audiophile headphone qualities in this class of headset.

Voice assistant platforms and ambient computing

Amazon Alexa and Google Assistant will still bring forth newer devices, whether in the form of speakers or displays. But Amazon Alexa and Microsoft Cortana will be part of the Voice Interoperability Initiative, allowing the same physical hardware to handle multiple voice-assistant platforms.

A question that will arise through this year is whether there will be a strong direction towards having these devices work as a fixed audio or video telephony endpoint. This is whether the device works in a similar fashion to the classic landline telephone service with its own number; or as an extension to a smartphone that is part of a mobile telecommunications service.

The voice-assistant platforms will end up becoming part of an ambient computing trend that is underscored by facilitators like Internet of Things and distributed computing. Here, it is about computing that blends in with your lifestyle rather than being a separate activity.

As far as the Internet Of Things is concerned, the Connected Home over IP protocol was set in stone. This effort, facilitated by Amazon, Google and Apple with the oversight of the Zigbee Alliance, is about an IP-driven Internet-Of-Things data-transport architecture. The idea is to do away with the protocol gateways that were being used with various smart-home applications, where manufacturers were goading consumers into using the manufacturer’s own protocol gateway with their devices rather than a third-party solution. There will be an emphasis on a safe, secure, interoperable Internet-of-Things network.

Data security and equipment maintenance in our personal and business lives

The Social Web will be considered a very important part of our lives with us primarily benefiting from it on tablets, smartphones or highly-portable laptops.

But it will still be a key disinformation vector. One of the new methods expected to be exploited this year is the creation of deepfakes. These are audio and video items created using artificial intelligence to make it appear as though a person said something when they didn’t. There will even be the ability to make the voice or face of a deepfaked person appear older or younger than they were when they were recorded, while keeping the voice or face as fluid as that of a real person.

Here, it will be used as a cyber weapon to create political, social and business instability by misrepresenting our leaders, whether they be in government, business or other circles. The deepfake will also be of value as a phishing tool, making the threat or plea appear more authentic to the victim.

As well, ransomware will begin to take on a network-wide dimension and affect business and service availability. Sensitive data, whether of a personal or business nature, will end up becoming the bargaining chip for ransomware hackers. This is in contrast to simply withholding access to a computer user’s data resources, which was often the case with earlier ransomware.

The Internet Of Things will also be considered a continual security risk especially due to poor software and firmware quality control. It will lead to a conversation regarding the maintenance of our online devices through their lifecycle, including making sure they are running software that is stable and secure.

Then there is the “end-of-support” issue, where a manufacturer ceases to show interest in older online devices that are still in use. This question surfaces when one has invested a significant amount of money in these devices and doesn’t want to throw out older equipment just because the manufacturer no longer wants to support it. It also goes against the grain of the post-Global-Financial-Crisis attitude most of us have adopted, where we don’t want to support a throwaway society but want to see what we buy last for the long haul.

The Sonos debacle raised the issue about what level of functionality the user should expect from their device along with how platform-based setups consisting of legacy and newer devices should behave. It also raised the issue of keeping the device’s software stable and secure.

Conclusion

This year will be considered a very interesting time for our online life as we see improvements to existing technologies along with newer conversations about how system-based technologies continue to evolve with a secure stable mindset.

The Sonos debacle has raised questions about our personal tech’s life cycle

Article

Sonos multiroom system press picture courtesy of Sonos

Sonos extend support for legacy products after backlash | PC World

From the horse’s mouth

Sonos

A letter from our CEO (Blog Post)

My Comments

Recently, Sonos sent some shivers around the Internet regarding their multiroom audio products’ life cycle.

This started with them installing a “Recycle” mode in their speakers and other devices, which would effectively take the devices out of action, tied to a rebate on new devices if the old equipment was returned to them for e-waste recycling.

It worried some social-media users because they want to keep extant equipment that functions properly going for as long as possible, including “pushing down” older equipment to secondary areas, selling it into the second-hand market or giving it to friends, relatives and community organisations while they upgrade to newer Sonos gear. Here, they really wanted the Sonos device to be detached from the user’s Sonos account and prepared as if ready to set up within a new system whenever it is given away or sold.

Then this past week, Sonos raised the prospect that multiroom-audio equipment made prior to model-year 2015 won’t get software updates after May 2020. This wasn’t conveyed properly: the affected equipment won’t benefit from feature updates but will still receive bug fixes, security updates and anything else to do with software quality.

There were also issues raised about a Sonos-based multiroom-audio system that consists of legacy equipment as well as newer equipment, which is the result of someone effectively “building out” their system by purchasing newer gear. An example I referred to in an article about the IKEA SYMFONISK speakers, which work on the Sonos platform, is to use the SYMFONISK speakers as a low-cost way of adding extra speakers for another room like the kitchen while you maintain the Sonos speakers in the areas that matter.

The concern raised was about the availability of software-quality updates, including incremental support for new or revised API “hooks” offered by online-audio services, along with the ability for the devices to keep functioning as expected.

Then there was the issue of logically segmenting a Sonos multiroom audio system so that newer devices gain newer functionality while older devices keep the status quo. At the moment, a Sonos multiroom system which works across the same logical network is divided into logical rooms to allow the speakers in one room to play the same source at the same volume level. Here, it may be about determining upgradeability based on the existence of newer speakers in a room, where older speakers in the same logical room work as “slave” speakers to the newer speaker.

What is being called out here is how long a manufacturer should keep new software available for the equipment and what kind of updates should be available for equipment that is long in the tooth. It focuses especially on keeping the older devices functioning at an expected level while running secure, bug-free firmware. Let’s not forget how older and newer devices can coexist in a system of devices based on a particular platform while providing consistent functionality.

This is more so where the equipment can enjoy a long service life, something that is expected of kit that costs a significant amount of money. It applies also to the fact that people build out these systems to suit their ever-changing needs.

Companies that observe the Sonos debacle could look at the mistakes Sonos made in properly conveying the issue of feature-update cessation for older products to their customer base. As well, they would have to look at how Sonos is tackling the issue of maintaining software quality, stability and security in their devices’ firmware along with catering to the reality of platform-based systems that have a mix of older and newer devices.

Telstra is the first telco to supply home-network hardware that supports Wi-Fi EasyMesh

Telstra Smart Modem Generation 2 modem router press picture courtesy of Telstra

Telstra Smart Modem Generation 2 – the first carrier-supplied modem router to be certified as compatible with Wi-Fi EasyMesh

Article – From the horse’s mouth

Telstra

Telstra offers world-first Wi-Fi EasyMesh™ standard in new Smart Wi-Fi Booster™ 2.0 (Press Release)

Previous HomeNetworking01.info coverage on Wi-Fi EasyMesh

Wi-Fi defines a new standard for distributed wireless networks

My Comments

Typically Australian telcos and ISPs who supply a modem-router to their customers as part of providing Internet service are associated with supplying substandard hardware that doesn’t honour current home-network expectations.

This time, Telstra has broken the mould with their Smart Modem Generation 2 modem router and the Smart Booster Generation 2 range extender. These devices support Wi-Fi EasyMesh so they can work with other routers or range extenders that are compliant with this standard.

At the moment, the Smart Modem can handle four of the range extenders, and Telstra’s marketing collateral specifies that these devices can only work with each other. This is most likely due to the absence of routers or range extenders from other suppliers that worked to this standard when the Smart Modem Generation 2 and Smart Booster Generation 2 were released.

The media release was talking of 450,000 Generation 2 Smart Modems in service around Australia, most likely due to the NBN providing an excuse to upgrade one’s modem-router. As I said in my post about this standard, it is independent of the hardware base of the Wi-Fi infrastructure devices, thus allowing an extant device to benefit from this technology through a firmware upgrade.

Here, Telstra has taken the step of providing the functionality to the existing Generation 2 Smart Modem fleet by offering it as part of a firmware upgrade, which is what should happen with carrier-supplied network equipment. This will be done automatically on an overnight basis or when you first connect your modem to the Internet service.

This is showing that a telco or ISP doesn’t need to reinvent the wheel when offering a distributed-Wi-Fi setup. Here, they can have their carrier-supplied Wi-Fi EasyMesh-compliant modem router work with third-party EasyMesh-compliant repeaters that are suited for the job.

The prosumer is now being considered as a distinct personal-IT user class

Articles – From the horse’s mouth

Lenovo IdeaPad Creator 5 15" clamshell laptop press picture courtesy of Lenovo USA

Lenovo IdeaPad Creator 5 15″ clamshell prosumer / content-creator laptop

Lenovo

Lenovo’s Five Must-Have Devices for the Digital Creator (Press Release)

My Comments

At the Consumer Electronics Show 2020, Lenovo launched their Creator series of desktop and laptop computers focused towards the “prosumer” user class. But what is this user class?

What is a prosumer?

The word “prosumer” is a portmanteau of the words “producer” and “consumer” in which the user produces something as well as consuming other things. For example, the person may end up taking a lot of photos not just for their personal family album but to create things like exhibitions or slide shows or illustrate books.

In the context of product positioning, it is a portmanteau of “professional” and “consumer” where products of that class stand between professional-class products pitched to business users who use it as part of their trade; and consumer-class products pitched to ordinary householders. Those products were effectively pitched at “serious users” who wanted what professional users were benefiting from without the huge price tag associated with that product class.

Here, the prosumer is a technology user who primarily creates content but isn’t doing it as part of a regular day job. Typically they would do this as a personal hobby or as an effort to support a non-profit organisation. In some ways, it may also augment another hobby or effort like making music or building a social-media presence.

They could also be making money creating content on a “job-by-job” basis for various end-users, but not have the volume of work to be considered a professional content creator. Examples may be photographers, videographers or entertainers who gain most of their work during particular seasons, or a budding film producer who is building up their body of work until they gain a reputation.

The last few decades of the 20th century saw companies involved in the consumer photography and AV industries research technology and create affordable products that satisfy the needs of this kind of user. Here, it is about turning out high-quality work that can be presented to people, especially paying customers.

This class of relatively-affordable “prosumer” equipment led to an easier entry path for people wishing to make money out of this kind of work like the photographers or videographers who you hire to photograph or film that special event; or project studios who prepare demo tapes for various live acts.

As well, it opened up a path for small businesses and community organisations to turn out high-quality creative material that can further their efforts with such things as a church having sermons available for the faithful to hear at a later date or a small business creating their own long-form advertising videos.

How is the computing world answering the prosumer user class?

The computing market caught up slowly with this user class’s needs, initially through the Apple Macintosh and laser printers facilitating desktop publishing in the late 1980s. Apple then took this further by optimising the Macintosh platform for multimedia production, acquiring a reputation in this field across the prosumer and professional space.

Toshiba Satellite P750 multimedia laptop

Toshiba Satellite P750 multimedia laptop

But prosumer users found that companies who manufactured Windows-based computers didn’t really cater to their needs. The initial effort was to create multimedia-grade computers with advanced graphics and sound subsystems. I have reviewed a few examples of this computer class with the Toshiba Satellite P750 being one of them. But this product class ended up being focused towards high-stakes gaming where the goal is towards responsiveness especially in a first-person-shooter game.

A few manufacturers like Sony made “flash-in-the-pan” efforts with computers that offered features and specifications that appealed to prosumer-class users, such as OLED displays with a very wide colour gamut. But these models didn’t stay on the market for long.

Nowadays, the prosumer would end up using a gaming-grade computer that may be seen as underpowered and unreliable for content-creation, audio-production or similar software. This is if they wanted the kind of performance necessary to edit or “finish” their creative work. If the gaming rig in question is a traditional desktop unit that can have its graphics card replaced, the user may substitute the gaming-optimised display card with a workstation-class or content-creation-class display card. Similarly, if the gaming rig is a laptop, all-in-one or low-profile desktop unit with a Thunderbolt 3 connection, they would use a “card-cage” external graphics module equipped with a workstation-class display card for this purpose.

Or, if they are in the market for a traditional three-piece desktop computer based around a system unit of a standard form-factor, they would go to an independent computer retailer. Here, they would specify a custom-built store-brand computer that works with this software in an optimum manner.

On the other hand, they might be advised to use a “certified workstation” computer that has been proven by the software vendors to work with this kind of software, but these are considered very expensive and have too many features, like managed-IT functionality, that they wouldn’t need. In some cases, it would lead towards buying an entry-level model in a manufacturer’s workstation-class product range.

Lenovo Yoga Creator 7 15" prosumer convertible laptop press picture courtesy of Lenovo USA

Lenovo’s entry in to the prosumer content-creator class of convertible laptops in the form of the Yoga Creator 7 15″ 2-in-1.

Lenovo’s initial Creator range of prosumer-class computing products ticks the necessary boxes. Here, they are based on the manufacturer’s consumer-class product range but have the necessary configuration that is proven by the software vendors to work with their modestly-priced content-creation software. They are offering two portable computers (IdeaPad Creator 5 clamshell and Yoga Creator 7) and a traditional-style desktop tower computer (IdeaCentre Creator 5) that is optimised for this kind of work.

This could lead on to other computer manufacturers who provide “certified-workstation-class” performance computers and peripherals pitched towards these “prosumer” users. Here, they would be based on the manufacturer’s consumer or small-business product lines but have the necessary hardware specification to work with the affordable content-creation software.

One of the key factors in the design of these computers is that the graphics infrastructure is optimised to work at standard refresh rates rather than the high refresh rates associated with gaming, and isn’t suited to the kind of rapid image-painting that fast-paced games demand. In a lot of cases, the graphics processor will be roped in as an auxiliary processor to facilitate rendering or transcoding.

Could the “prosumer-class” computer appeal to all users?

I would see these computers appeal to people who frequently create content on their computers and who use or intend to use highly-capable image, video or audio editing software for this purpose. They can also earn their keep with people and organisations who use advanced audio and video playback setups, such as computer-based DJ/karaoke setups with advanced playback effects or multiple video channels.

The computers can offer high-end gaming performance which can please those users who are wanting to play a video game for their rest and relaxation. But I wouldn’t necessarily expect them to satisfy an expectation of esports-class gaming.

I could also see these computers appeal to students who are studying multimedia production, architecture / engineering, statistics and the like and want a low-risk entry point when it comes to technology. It would work alongside the fact that the software vendors offer reduced pricing on the software associated with these studies for students who are currently enrolled in these courses. This caters for the fact that a student may be very fickle about their course and couldn’t justify a full-bore workstation-class computer if they don’t see themselves completing the course and following that career path.

So it is becoming a situation where other user classes are being discovered when it comes to marketing personal and small-business information-technology solutions. This time it is the creative types who create content on an ad-hoc basis rather than as a regular day job, and who want something that offers “certified-workstation” performance standards for the cost of a gaming rig.

Ultrasound being used as a way to measure user proximity to gadgets

Article – From the horse’s mouth

Google Nest

How ultrasound sensing makes Nest displays more accessible (The Keyword blog post)

My Comments

Google is implementing an automatic display-optimisation feature in their Nest Hub smart-display products that is based on technology which has been used for a very long time.

Ultrasonic technology has been used in various ways by nature and humans to measure distance. In nature, bats and dolphins, which don’t have good vision, use this approach to “see” their way. It is used extensively in military and civilian marine applications to see what is underneath a boat or around a submarine, and is also a common medical-imaging technique.

As well, in the late 1970s, Polaroid implemented ultrasound as part of their active autofocus system, which ended up as a feature for their value-priced and premium instant-picture cameras. Here, this was used to measure the distance between the camera and the subject in order to adjust the lens for proper focus. There were limitations associated with the technology like not being able to work when you photograph through a window due to the ultrasonic waves not passing through the glass.

But Google has implemented this technology as a way to adjust the display on their Nest Hub smart displays for distant or close operation. The front of a Google Nest Hub has an ultrasonic sensor that works in a similar way to what was used in a Polaroid auto-focus instant-picture camera.

Rather than using the distance measurement from the ultrasonic sensor to adjust a camera’s lens as the Polaroid setup did, this application adjusts the display according to the user’s distance from the Nest Hub. If you are far from the Nest Hub, you see reduced information with the key details in a larger typeface. If you come closer, you see more detail in a smaller typeface.
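
To picture the mechanism, here is a small Python sketch of the general time-of-flight idea behind ultrasonic ranging, together with a hypothetical rule for switching between a “glanceable” and a “detailed” layout. The threshold, echo timings and function names are my own illustrative assumptions rather than Google’s actual Nest Hub implementation.

```python
# Illustrative sketch of ultrasonic time-of-flight ranging plus a simple
# distance-based display rule. The numbers are assumptions, not Nest Hub logic.

SPEED_OF_SOUND_M_PER_S = 343.0  # in air at roughly 20 degrees Celsius

def distance_from_echo(echo_delay_seconds: float) -> float:
    """The pulse travels out and back, so halve the round-trip time
    before multiplying by the speed of sound."""
    return SPEED_OF_SOUND_M_PER_S * echo_delay_seconds / 2

def choose_layout(distance_m: float, near_threshold_m: float = 1.5) -> str:
    """Hypothetical rule: far away -> fewer items in a large typeface,
    close up -> more detail in a smaller typeface."""
    return "detailed" if distance_m < near_threshold_m else "glanceable"

if __name__ == "__main__":
    for delay in (0.004, 0.02):  # echo round-trip times in seconds
        d = distance_from_echo(delay)
        print(f"echo {delay * 1000:.0f} ms -> about {d:.2f} m -> {choose_layout(d)} layout")
```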

Nest Hub Directions display press picture courtesy of Google

The Google blog article described this as being suitable for older users and those of us who have limited vision. The fact that you have the ability to see key information in a large typeface at a distance can make the Nest Hub accessible to this user group. But others can’t see deeper information unless they are very close to the device.

End-user privacy is still assured thanks to the use of a low-resolution distance-measurement technology whose results are kept within the device. As well, there is a menu option within the Google Home app’s Device Settings page to enable or disable the feature.

Initially, it is being used for the timer and current-time displays as well as for displaying travel time and traffic conditions for a planned journey that you set up with Google Maps. But Google and other software developers who develop for the Google Home ecosystem will add distance-sensitive display functionality to more applications like appointments and alerts.

Some people could see this technology being used not just for optimising the readout on a smart display but also to ascertain whether anyone is actually near these devices. This could then enable functionality like energy-saving behaviour where the display turns off if no-one’s near it.

But what Google has to do is license out this technology to allow others to implement it in other fixed-display-based devices. Here, it could become of more use to the many who don’t go for a Google Nest Hub.


Logitech improves on the XBox Adaptive Controller with a cost-effective control package

Articles

Logitech G Adaptive Gaming Kit press picture courtesy of Logitech International

Logitech G Adaptive Gaming Kit has what you need for the XBox Adaptive Controller

Logitech’s $100 kit for the Xbox Adaptive Controller makes accessible gaming cheaper | CNet

Microsoft went all in on accessible design. This is what happened afterwards | FastCompany

Previous HomeNetworking01.info coverage on the XBox Adaptive Controller

Microsoft runs a Super Bowl ad about inclusive gaming

From the horse’s mouth

Logitech

Adaptive Gaming Kit (Product Page, Press Release, Blog Post)

My Comments

Recently, Microsoft launched the XBox Adaptive Controller as an accessible games-console controller for people who face motion- and dexterity-related disabilities. They even promoted it in a TV commercial run during this year’s Super Bowl football match, which would have been considered to go against the grain for the usual sporting and video-game audiences.

This has been part of Microsoft’s push towards inclusive gaming, and in my article about that controller I wrote about more than just providing video gaming for disabled people. I also called out the therapeutic value that some games can have for elderly people as well as disabled people, with Microsoft offering a lower barrier to entry for independent game developers to create games that underscore that concept.

It has actually been underscored in a recent CNET video article about the XBox Adaptive Controller being used to help a US war veteran who lost some of his motion and dexterity in a motorcycle accident.

Click or tap here to play the video

But Logitech have taken this a step further by offering an accessory kit with all the necessary controls for US$99. This kit, known as the Adaptive Gaming Kit, makes accessible gaming more affordable so you can have a setup to suit your particular needs without having to choose and buy the necessary accessories separately. This is important especially where a person’s needs will change over time and you don’t want to have to buy newer accessories to suit each need.

The package comes with rigid and flexible mats with Velcro anchor points for the various buttons and other controls. The flexible mats can even allow the controls to be anchored around a chair’s arms or other surfaces while the whole kit can allow for the equipment to be set up and packed up with minimal effort. The controls all have their own Velcro anchor points and screw holes for anchoring to other surfaces.

Logitech used their own intellectual capital in designing the kit while working with Microsoft to evolve the product. Here, they implemented their own mechanical-switch technology that is part of their high-end keyboards, including the low-profile switches used in their low-profile keyboard range. The large buttons have stabilisers built into them so you can press them from anywhere on the button’s surface. This means they aren’t reinventing the wheel when it comes to the product’s design or manufacture because of the use of common technology.

What I have liked about Logitech’s Adaptive Gaming Kit is that the idea of accessible gaming comes at a price point that represents value for money. This is compared to various assistive-technology solutions, which tend to require the user to pay a king’s ransom to acquire the necessary equipment. That cost has often led to the government or charitable sector not getting their money’s worth out of their disabled-person support programs.

Welcome to the new age of making assistive technology become more mainstream, not just for disabled users but for the realities associated with the ageing population such as ageing Baby Boomers and people living longer.

Germany to instigate the creation of a European public cloud service

Article

Map of Europe By User:mjchael by using preliminary work of maix¿? [CC-BY-SA-2.5 (http://creativecommons.org/licenses/by-sa/2.5)], via Wikimedia Commons

Europe to have one or more public cloud services that respect European sovereignty and values

Germany to Unveil European Cloud to Rival Amazon, Alibaba | ITPro Today

France, Germany want more homegrown clouds to pick from | ITNews (Premium)

My Comments

Germany is instigating a Europe-wide project to create a public cloud-computing service. As well, France is registering its intent towards the same idea, with a view to creating another such service.

Both countries’ intention is to rival what the USA and Asia are offering regarding public-cloud data-processing solutions. But, as I have said before, it is about having public data infrastructure that is sovereign to European laws and values. This also includes the management and dissemination of such data in a broad and secure manner.

Freebox Delta press photo courtesy of Iliad (Free.fr)

… which could also facilitate European software and data services like what is offered through the Freebox Delta

The issue of data sovereignty has become of concern in Europe due to the USA and China pushing legislation that enables their governments to gain access to data held by data-service providers based in those countries. This is even if the data is held on behalf of a third-party company or hosted on servers installed in other countries. The situation has been underscored by a variety of geopolitical tensions involving those countries, such as the recent USA-China trade spat.

It is also driven by some European countries being dissatisfied with Silicon Valley’s dominance in the world of “as-a-service” computing. This is more so with France where there are goals to detach from and tax “GAFA” (Google, Apple, Facebook and Amazon) due to their inordinate influence in consumer and business computing worlds.

… or BMW’s voice-driven assistant for in-car infotainment

Let’s not forget that Qarnot in France has designed computers that put their waste heat to use for heating rooms or creating hot water in buildings. This approach would suit a widely-distributed data-processing setup that could be part of public cloud-computing efforts.

Questions that will crop up with the Brexit agenda when Europe establishes this public cloud service include British data sovereignty if data is held on the European public cloud, and whether Britain will have any access to or input into it.

Airbus A380 superjumbo jet wet-leased by HiFly at Paris Air Show press picture courtesy of Airbus

… just like this Airbus A380 superjumbo jet shows European prowess in aerospace

Personally I could see this as facilitating the wider creation of online services by European companies especially with the view to respecting European personal and business values. It could encompass ideas like voice-driven assistant services, search engines, mapping and similar services for consumers or to encourage European IT development.

Could this effort that Germany and France put forward be the Airbus or Arianespace of public-cloud data services?

PAX 2019–Indie games gaining a strong appeal

Previous Coverage about the indie gaming segment

Untitled Goose Game on Alienware stand at PAX 2019

Untitled Goose Game ends up as one of the feature games to demo computer gaming hardware at PAX 2019

Alaskan fables now celebrated as video games

Two ways to put indie games on the map

Indie games like Untitled Goose Game appeal to people outside the usual game demographics

My Comments

When I visited the PAX 2019 gaming exhibition in Melbourne, I noticed a distinct interest in the indie game sector as distinct from the mainstream AAA+ games sector. This became of interest thanks to Untitled Goose Game becoming the talk of the town as a strong example. As well, the Victorian Government was using this show to showcase games that are developed locally, as part of using public arts funding to support this kind of game development.

Here I noticed a number of different approaches to how these games worked. One game was what would be called a Web game or browser game, which plays within a Web browser. This method was a common approach for online games sites like Miniclip or for games offered by Facebook and co as part of their platforms.

I talked to some of the developers of this class of game, and they noted that the games have modest performance requirements. The same was true of the games written to run natively on the host device’s operating system. This means they can be played on a business laptop or home computer that has integrated graphics and baseline RAM.

But most of the laptops being used to play these games were connected to AC power rather than working on battery power. Here, I raised with one of the game developers the issue of their games’ power requirements and of optimising them to run efficiently, especially if the laptop is to run on battery power, and they concurred. One use case regarding power efficiency for games that I was thinking of is overseas travellers who want to while away a long flight playing one of these games.

Similarly, these games can be played casually. That is, they provide enjoyable gameplay over short or long sessions, whereas a significant number of popular AAA+ games tend to require long, intense playing sessions. As well, a lot of the indie games appeal to a wide audience, including those who are easily pushed out of video gaming like women or older people.

The indie games also don’t convey aggressive or highly-competitive ideals which do increase their appeal to parents and others who are concerned about what is conveyed in most of the popular video games on the market. This factor is becoming very important due to an increasing awareness about social values and how popular culture respects them with it impacting on how we consume media.

A situation that a lot of these developers face when writing their games for the console platforms, or porting an existing title that way, is the tight requirements. Here, they have to make sure that the game handles all error conditions, including a controller being disconnected mid-play. There is also the requirement for the game to be playable with a handheld controller that uses one or two D-pads, a joystick and mapped buttons.

These points are highlighting the key differences that the indie game scene is about where a distinctly different vibe exists compared to the AAA+ video games offered by the mainstream game publishers. This is very similar to what is seen with film where the art-house and independent movies carry a different vibe to what the Hollywood blockbuster movies offer.

Keeping the indie gaming scene continuously alive and maintaining the existence of standalone independent games studios around the world can then allow for a diverse range of games that appeal to a wide range of tastes.

Amazon starts Voice Interoperability Initiative for voice-driven assistant technology

Articles

Amazon Echo on kitchen bench press photo courtesy of Amazon USA

Devices like Amazon Echo could support multiple voice assistants

Amazon Creates A Huge Alliance To Demand Voice Assistant Compatibility | The Verge

Amazon launches Voice Interoperability Initiative — without Google, Apple or Samsung | ZDNet

Amazon enlists 30 companies to improve how voice assistants work together | Engadget

From the horse’s mouth

Amazon

Voice Interoperability Initiative (Product Page)

Amazon and Leading Technology Companies Announce the Voice Interoperability Initiative (Press Release)

My Comments

Amazon have instigated the Voice Interoperability Initiative which, at the moment, allows a hardware or software device to work with multiple compatible voice-driven AI assistants. It also includes the ability for someone to develop a voice-driven assistant platform that can serve a niche yet have it run on commonly-available smart-speaker hardware alongside a broad-based voice-driven assistant platform.

Freebox Delta press photo courtesy of Iliad (Free.fr)

Freebox Delta as an example of a European voice-driven home assistant that could support multiple voice assistant platforms

An example they called out was running the Salesforce Einstein voice-driven assistant, which works with Salesforce’s customer-relationship-management software, on the Amazon Echo smart speaker alongside the Alexa voice assistant. Similarly, a person living in France could take advantage of the highly-competitive telecommunications and Internet landscape there by buying the Freebox Delta smart speaker / router and having it use Free.fr’s voice-assistant platform or Amazon Alexa on that same device.

Microsoft, BMW, Free.fr, Baidu, Bose, Harman and Sony are behind this initiative, while Google, Apple and Samsung are conspicuously absent. This is most likely because Google, Apple and Samsung have their own broad-based voice-driven assistant platforms that are part of their hardware or operating-system platforms, with Apple placing more emphasis on vertically integrating some of their products. This is even though Samsung’s Android phones are set up to work with either their Bixby voice assistant or Google’s Assistant service.

Intel and Qualcomm are also behind this effort by offering silicon that provides the power to effectively understand the different wake words and direct a session’s focus towards a particular voice assistant.

The same hardware device or software gateway can recognise assistant-specific wake words and react to them on a session-specific basis. There will be the ability to assure customer privacy through measures like encrypted tunnelling for each assistant session, along with an effort to be power-efficient, which is important for battery-operated devices.
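
To make the “one device, several assistants” idea more concrete, here is a minimal Python sketch of a dispatcher that routes a request to whichever assistant’s wake word was heard, keeping each session separate. The wake words, assistant names and handler interface are purely illustrative assumptions; real devices would do the wake-word spotting in dedicated silicon and encrypt each assistant’s session end to end.

```python
# Minimal sketch of a multi-assistant wake-word dispatcher.
# Assistant names, wake words and the handler interface are illustrative only.

from typing import Callable, Dict

class WakeWordDispatcher:
    def __init__(self) -> None:
        # Maps a wake word to the handler for that assistant's service.
        self._handlers: Dict[str, Callable[[bytes], str]] = {}

    def register(self, wake_word: str, handler: Callable[[bytes], str]) -> None:
        self._handlers[wake_word.lower()] = handler

    def handle_utterance(self, wake_word: str, audio: bytes) -> str:
        # Only the assistant whose wake word was spoken receives the audio,
        # one session at a time; the other assistants never see the request.
        handler = self._handlers.get(wake_word.lower())
        if handler is None:
            return "No assistant registered for that wake word"
        return handler(audio)

if __name__ == "__main__":
    dispatcher = WakeWordDispatcher()
    dispatcher.register("alexa", lambda audio: "handled by the broad-based assistant")
    dispatcher.register("hey einstein", lambda audio: "handled by the niche CRM assistant")

    print(dispatcher.handle_utterance("alexa", b"...recorded speech..."))
    print(dispatcher.handle_utterance("hey einstein", b"...recorded speech..."))
```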

Personally I see this as an opportunity for companies to place emphasis on niche voice-assistant platforms, like what Salesforce is doing with their Einstein product or Microsoft with its refocused Cortana product. It can even make the concept of these voice assistants more relevant to the enterprise market and business customers.

Similarly, telcos and ISPs could create their own voice-driven assistants for use by their customers, typically with functionality that answers what they want out of the telco’s offerings. It can also extend to the hotel and allied sectors that want to use voice-driven assistants to provide access to functions of benefit to hotel guests, like room service, facility booking and knowledge about the local area. Let’s not forget vehicle builders who implement voice-driven assistants as part of their infotainment technology so that the driver keeps both hands on the wheel and eyes on the road.

This kind of offering can open up a market for the creation of “white-label” voice-assistant platforms that can be “branded” by their customers. As well, some of these assistants can be developed with a focus towards a local market’s needs like high proficiency in a local language and support for local values.

For hardware, the Amazon Voice Interoperability Initiative can open up paths for innovative devices. This can lead towards ideas like automotive applications, smart TVs, built-in use cases like intercom / entryphone or thermostat setups, and software-only assistant gateways that work with computers or telephone systems, amongst other things.

With the Amazon Voice Interoperability Initiative, there will be increased room for innovation in the voice-driven assistant sector.