Current and Future Trends Archive

Are Siri and Alexa being seen as personal companions?

Article

Is Siri ending up as your personal companion?

Conversations with virtual assistants like Siri, Alexa may be signs of loneliness | First Post

Talking to Siri often? You’re probably lonely | Times Of India

Do YOU rely on your phone for company? Human-like gadgets can offer relief from loneliness in the short term | Daily Mail

Older adults buddy up with Amazon’s Alexa | MarketWatch

My Comments

Hey Siri! Why am I alone now?

A situation that has been highlighted lately is someone sitting with their iPhone in hand or at the kitchen table beside an Amazon Echo speaker, trying to build a conversation with Siri or Alexa rather than simply asking these voice-driven assistants to do something.

Amazon Echo on kitchen bench press photo courtesy of Amazon USA

Is this smart speaker becoming your personal companion?

Here, a Kansas University study found that Siri, Alexa and co are being seen as a short-term salve for social exclusion and loneliness. This is something brought on by broken relationships, or by the increasing number of work situations where one spends significant amounts of time away from their significant other or their normal communities. It is also symptomatic of the loss of community that has come about in this day and age.

It is also worth knowing that older and disabled adults are using Alexa or Google Home as a companion in the context of managing lights, or simply asking for the time or for music. These devices are deliberately designed to look like ordinary consumer-electronics or IT hardware rather than having the bland look typically associated with assistive devices. They can also serve as an aide-memoire for dementia sufferers, but only in the early stages of the condition before it becomes worse.

But Siri, Alexa, Cortana and co are not perfect replacements for real-life friends. There is the long-term risk of losing real human interaction if you rely on them as your companions. Here, it is better to keep them serving you as a voice-operated “digital concierge” that helps with finding information or running your smart home, rather than as the be-all-and-end-all digital companion.

Send to Kindle

Multiroom to be another common feature for smart speakers

Article

Amazon Echo on kitchen bench press photo courtesy of Amazon USA

Amazon Echo – to become another online multiroom audio system

Amazon Echo multiroom audio not far off, report says | CNet

My Comments

A feature that is showing up with the “smart speakers” that are part of the various voice-driven home assistant platforms is the ability for multiple speakers of the same platform to work as a multiroom system. This is where the same content source can be played through multiple speakers of that platform, including the ability to have multiple speakers or audio devices in a logical group representing, perhaps, a floor or an area of the house. This functionality builds on the audio-content playback abilities, such as Spotify, TuneIn Radio and Tidal, offered by the various voice-driven home assistant platforms.
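As a rough illustration of the grouping idea described above, here is a minimal sketch in Python. The class names, group names and `play()` method are entirely hypothetical and do not represent any vendor's real API:

```python
# Hypothetical sketch of multiroom speaker grouping -- not any platform's actual API.

class Speaker:
    def __init__(self, name):
        self.name = name
        self.now_playing = None

    def play(self, source):
        self.now_playing = source

class SpeakerGroup:
    """A logical unit (e.g. a floor or area of the house) that fans one
    content source out to every member speaker."""
    def __init__(self, name, members):
        self.name = name
        self.members = list(members)

    def play(self, source):
        # The same content source goes to each speaker in the group.
        for speaker in self.members:
            speaker.play(source)

kitchen = Speaker("Kitchen Echo")
lounge = Speaker("Lounge Echo")
downstairs = SpeakerGroup("Downstairs", [kitchen, lounge])
downstairs.play("Spotify: Jazz playlist")
```

The key design point is that the group, not the individual speaker, becomes the addressable playback target.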

Google Home – already a multiroom audio system as well as a smart speaker

Google has already established this with their Home speakers and Chromecast-based audio connectivity devices. But Amazon intends to join in by allowing you to play the same content source through multiple Echo speakers and to treat a group of speakers as a logical unit. Let’s not forget that while the market is this competitive, Apple and Microsoft won’t want to miss out on the idea of multiroom audio as part of their voice-activated home assistant platforms.

Similarly, Amazon has aligned the Alexa platform with DTS’s PlayFi multiroom audio platform, which is pitched at the premium hi-fi market. DTS has a large number of hi-fi names on board and wants to integrate full Alexa voice functionality into their speakers and other audio devices.

There are some feature possibilities that may end up in the product evolution of these smart-speaker platforms. One of these could be to set one or more pairs of speakers up as stereo pairs, which can yield improved separation when you listen to stereo content. Similarly, there could be the idea of creating a multiple-microphone array out of a group of speakers to make it easier for the voice-driven home assistant to understand you.
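If stereo pairing did arrive, the channel assignment could look something like this sketch. Everything here, including the speaker names and the function, is invented for illustration:

```python
# Hypothetical stereo-pair channel assignment for two smart speakers.

def split_stereo(left_speaker, right_speaker, track):
    """Send only the left channel to one speaker and only the right channel
    to the other, yielding the improved separation described above."""
    return {
        left_speaker: (track, "left"),
        right_speaker: (track, "right"),
    }

pair = split_stereo("Lounge Echo L", "Lounge Echo R", "Abbey Road")
```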

Who knows how hot the competition for the voice-driven home assistant that talks to you is going to be and whether this will mimic the home videocassette format wars of the early 80s?


Mixing audio and Bluetooth Low Energy–what is happening

Article

Sony SBH-52 Bluetooth Headphone Audio Adaptor

Audio over Bluetooth Low Energy could make these devices last for a long time on a single battery charge

Apple Used Bluetooth Low Energy Audio for Cochlear Implant iPhone Accessory | MacRumors

My Comments

Any of you who have used Bluetooth headsets with your smartphones may have come across situations where the headset ceases to function or sounds the “low battery” signal when you use these devices a lot. This can happen more so if you listen to music and then make or take a long phone call using the headset, something I had experienced many times with the Sony SBH-52 audio adaptor. But the audio protocol is being worked on to avoid consuming too much battery runtime.

Plantronics BackBeat Pro Bluetooth noise-cancelling headphones

.. as it could with Bluetooth headsets

Apple and Cochlear, who are behind the Australian-invented Cochlear Implant hearing-assistance technology, have developed Bluetooth Low Energy Audio to provide a high-quality audio link between mobile devices and headsets while making very few demands on the battery. As well, the Bluetooth Special Interest Group is working on a similar protocol to achieve the same gains, with the goal of making it part of Bluetooth 5.0. But this has to be supported in a vendor-independent manner, in the same way as the current Bluetooth audio technologies in circulation.

But why is there an imperative to develop a low-energy audio profile for Bluetooth?

One key usage class is to integrate Bluetooth audio functionality into hearing aids and similar hearing-assistance devices that are expected to run for a very long time. Here, we are also talking about very small intra-aural devices that may sit in or on your ear, or be integrated in a set of eyeglasses. The goal is to allow not just for audio access to your smartphone during calls or multimedia activity, but also to have an audio pathway from the phone’s microphone to the hearing-assistance device, with the phone acting as a control surface for that device.

Similarly, there is the goal of improving battery runtime for Bluetooth headsets and audio adaptors, so as to avoid the situation I described above. It can also cater towards improved intra-aural Bluetooth headset designs or lightweight designs that can, again, run for a long time.

Let’s not forget the fact that smartwatches are being given audio abilities, typically to allow for use with a voice-activated personal assistant. But devices of this ilk could be set up to serve full-time as a Bluetooth headphone audio adaptor with full hands-free operation. The expectation here could even be to have the display on the wearable active while in use, whether to show the time, steps taken, or metadata about the call in progress or whatever you are listening to.

Once audio over Bluetooth Low Energy technology is standardised, it could be a major improvement path for Bluetooth-based audio applications.


An Internet-connected laptop could replace the link van for location broadcasting

Article

Dell Inspiron 15 Gaming laptop

A laptop of the Dell Inspiron 15 7000 Gaming class could end up as today’s equivalent of an outside-broadcast link truck if the BBC has its way

BBC livestreaming Edinburgh festival from a laptop | CNet

From the horse’s mouth

BBC

IP Studio whitepaper

My Comments

If you watch news, sports, live concerts or similar content on TV, you will be seeing location broadcasting at work as they bring that footage to your screen.

Typically, a location broadcast by a TV studio either required video recordings that were transported to the studio before broadcast, or relied on a “link truck”, a vehicle equipped with an antenna that beamed the location signal back to the studio over a microwave link, where the studio needed access to real-time vision. Some venues that are regular sources of content may implement a wired connection facilitated by a leased line between the venue and the studio.

Such setups have been limited by the amount of time required to position the link truck and establish the radio link. They can also be hampered by factors like weather, or the need for a line-of-sight connection to the studio’s link tower.

But the Internet and the shrinking size of computer technology have opened up the possibility of using a laptop of the same or better calibre as the Dell Inspiron 15 7000 Gaming I recently reviewed, connected to a high-bandwidth Internet connection, as an alternative to the traditional location-broadcast setup, especially for shows streamed directly to the Internet. The technology, called IP Studio, is part of an effort Auntie Beeb is making towards implementing IP technology for TV-show production. In this context, it is about using a mobile wireless network to interconnect the cameras with the laptops for location broadcasting.

Here, they are using the Edinburgh Festival and Edinburgh Fringe Festival to prove a setup based around laptops uplinking via the same kind of technology used for ordinary Internet service. The editing software being used is primarily a Web app that works in the browser rather than a standalone application. But it could allow for the link-van approach, where a laptop is used to upload the edited vision to the broadcaster’s studio to be inserted into the programme.

Once it is proven, the BBC’s IP Studio technology, which is based primarily on open-source tools, could be seen as beneficial for all sorts of broadcasters who want to save money and set-up / tear-down time, and gain increased flexibility, with their location work. For example, news-gathering teams could set up and “go live” on a breaking event or press conference with a very short lead time.

The same technology could open up affordable location-broadcasting to community broadcasters and specific-event Webcasters who place importance on cost-effective setups. Even organisations with video-editing talent among their staff or volunteers could see this technology appeal to them.

What this is showing is that the Internet and cost-effective computer design is making big-time broadcasting and production technology more affordable.


Traditional movie and TV studios turn towards direct video-on-demand services

Article

Apple TV 4th Generation press picture courtesy of Apple

Could the various smart-TV platforms like Apple TV, Chromecast or Roku be required to run “content malls”?

Disney Is Dumping Netflix And Launching New Streaming Apps | FastCompany

My Comments

Traditionally, movie and TV production studios have been making their wares available through distributors or other third-party elements.

But what has happened is that they are moving towards running their own video-on-demand services in a similar way to how some manufacturers and first-party distributors have set up their own “direct” mail-order or online storefronts in the US market.

Disney is heading this way by moving their content from Netflix to their own service, something they announced as part of a third-quarter earnings call. It may sound close to the established “catch-up TV” model, especially where TV channels keep their own content on their video-on-demand services for a long time. Here, Disney sees the service as the online home for some of their content like Toy Story 4 or the live-action version of The Lion King, along with their library of TV and movie content. Could this mean the ability to “pull up” Disney’s short cartoons that graced Saturday-morning children’s TV or the early part of cinema sessions before and after World War II? Or the ability to see those classic animated feature-length movies whenever the fancy takes you?

Here, it is similar to what HBO and CBS have been working on by offering direct video-on-demand services for their content. But the “studio-direct” model could make it difficult to navigate content of a kind offered by multiple studios, as well as requiring potential viewers to sign up with multiple video-on-demand providers. In some cases, it may also affect whether a potential viewer signs on at all, because of the cost of the service or the kind of business model offered by the provider. It may appeal to those of us who warm to a particular studio’s output.

Studio-direct VOD along with niche VOD may then lead to a requirement to establish video-on-demand as a “content mall” facilitated by mobile / desktop-computing platforms and smart-TV / set-top / games-console platforms. This may involve the use of content directories that link to the service providers, in a similar vein to TuneIn Radio or vTuner for Internet radio. At the moment, Roku is the provider working closest to a “content mall” for their set-top platform, but there will be pressure on other platforms to take this approach.
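The content-directory idea can be sketched very simply: a directory that indexes many providers' catalogues and points the viewer at whoever carries a title, much as TuneIn or vTuner indexes Internet-radio stations. The provider names and catalogues below are invented for illustration:

```python
# Hypothetical "content mall" directory across several studio-direct VOD services.

catalogues = {
    "DisneyVOD": ["The Lion King", "Toy Story 4"],
    "HBO Now": ["Game of Thrones"],
    "CBS All Access": ["Star Trek: Discovery"],
}

def find_title(title):
    """Return every provider that carries the given title, so the viewer
    can be linked through to that provider's own service."""
    return [provider for provider, titles in catalogues.items()
            if title in titles]
```

The directory itself holds no content; it only resolves a title to the services that do, which is what makes the "mall" model workable across many providers.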

There are very interesting times ahead for video-on-demand as the traditional “single-front” Netflix model gives way to the mall-based approach.


USB 3.2–coming soon to your computer

Article

USB 3.2 to use the same USB Type-C connector as USB 3.1, but with increased throughput

USB-C is already getting a major update, and it will double data transfer speeds | Mashable

My Comments

The USB connection has been recently revised once more, but this time it is about increased bandwidth.

This standard emerges in the form of USB 3.2, which allows for bandwidths of up to 20Gb/s, double that of USB 3.1, thanks to the use of multi-lane technology that runs two lanes over the existing USB-C wiring.

It uses the same physical connection as USB 3.1, which means that devices built to this standard will use USB-C connectors and you can connect compliant host devices to compliant peripherals using USB-C cables. The system works on a “best-case” approach: if both the host and the peripheral are USB 3.2 compliant, you benefit from the higher throughput; otherwise, the link falls back to USB 3.1 specifications.
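The "best-case" behaviour amounts to both ends advertising what they support and the link running at the best mode common to both. Here is a deliberately simplified sketch; the function and speed table are illustrative, not the real USB negotiation logic:

```python
# Simplified "best-case" link negotiation -- not the actual USB protocol machinery.

GEN_SPEEDS_GBPS = {"USB 3.1": 10, "USB 3.2": 20}  # approximate top throughput

def negotiate_link(host_standard, device_standard):
    """Run at USB 3.2 speed only when BOTH ends support it;
    otherwise fall back to USB 3.1 operation."""
    if host_standard == "USB 3.2" and device_standard == "USB 3.2":
        return "USB 3.2", GEN_SPEEDS_GBPS["USB 3.2"]
    return "USB 3.1", GEN_SPEEDS_GBPS["USB 3.1"]

# A USB 3.2 host with a USB 3.1 peripheral steps back to the older mode.
mode, speed = negotiate_link("USB 3.2", "USB 3.1")
```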

Once the standard is set in stone, you may find that some devices, such as certain computer USB interface chipsets, support in-field software-based upgrading to the new standard. Otherwise, a subsequent generation of computer and peripheral equipment will come equipped for it.

The main applications where I see this connection come into its own would be high-capacity external storage or high-resolution display setups. And of course there will be the USB hubs and docks (expansion modules), which are about increased connectivity, equipped with this connection type.

Personally, I would see USB 3.2 become a “next-generation” approach for USB-based peripheral and device connectivity, something to look forward to with subsequent generations of computer equipment.


NVIDIA offers external graphics module support for their Quadro workstation graphics cards

Articles

Razer Blade gaming Ultrabook connected to Razer Core external graphics module - press picture courtesy of Razer

NVIDIA allows you to turn a high-performance Ultrabook like the Razer Blade into a mobile workstation when you plant a Quadro graphics card in an external graphics module like the Razer Core

Nvidia rolls out external GPU support for Nvidia Quadro | CNet

NVIDIA External GPUs Bring New Creative Power to Millions of Artists and Designers | MarketWired

From the horse’s mouth

NVIDIA

Press Release

My Comments

Over the last year, there has been a slow trickle of external graphics modules that “soup up” the graphics capabilities of computers like laptops, all-in-ones and highly-compact desktops by using outboard graphics processors. Typically these devices connect to the host computer using a Thunderbolt 3 connection, which provides bandwidth equivalent to four lanes of the PCI Express expansion-card standard used in desktop computers.

At the moment, this approach for improving a computer system’s graphics abilities has been focused towards gaming-grade graphics cards and chipsets, which has left people who want workstation-grade graphics performance in the lurch.

But NVIDIA has answered this problem by providing a driver update for their TITAN X and Quadro workstation graphics cards. This allows Windows to work with these cards even if they are installed in a “card-cage” external graphics module rather than on the host computer’s motherboard.

Not just that, NVIDIA is to start allowing external-graphics-module manufacturers to submit their products for certification, so that they are proven by NVIDIA to let these cards work reliably at optimum performance. This is a variation on the certified-workstation concept, where all the components in a particular computer are certified by Autodesk and similar software vendors to work reliably and perform at their best with their CAD or similar software.

What is being pitched in this context is a “thin-and-light” laptop of the Dell XPS 13 kind (including the 2-in-1 variant), an “all-in-one” desktop computer like the HP Envy 34 Curved All-In-One, or an ultra-compact “next unit of computing” device like the Intel Skull Canyon being able to do workstation-class tasks with the kind of graphics card that best suits the job.

Some workstation users will then ask whether the computer’s main processor and RAM are up to these tasks even with a workstation-grade graphics card added on, and may consider the approach unsatisfactory even if the host computer has a lot of RAM and / or runs a Core i7 CPU. But something like a gaming laptop with a gaming-calibre graphics chipset may benefit from a Quadro in an external graphics “card cage” module when the system is destined to do a lot of video editing, CAD or animation work.

Personally, I see the concept of the Quadro workstation graphics chipset in an external graphics module as a way to allow hobbyists and small-time professionals to get a foot in the door of high-performance workstation computing.


Europeans could compete with Silicon Valley when offering online services

Map of Europe By User:mjchael by using preliminary work of maix¿? [CC-BY-SA-2.5 (http://creativecommons.org/licenses/by-sa/2.5)], via Wikimedia Commons

Very often I have read articles from European sources about Silicon Valley companies not respecting European values like privacy. This ends up with the European Commission taking legal action against powerful Silicon Valley tech kings like Facebook or Google, placing requirements or levying fines on these companies.

But what can Europe also do to resolve these issues?

They could encourage European-based companies to work on Internet services like Web-search, social networking, file storage and the like that compete with what Silicon Valley offers. But what they offer can be about services that respect European personal and business values like democracy, privacy and transparency.

There has been some success in this field in the aerospace industry, with Airbus rising up to challenge Boeing. This was most evident with Airbus releasing the A380 high-capacity double-decker long-haul jet and Boeing answering with the fuel-saving 787 Dreamliner. Let’s not forget the rise of Arianespace in France, which established a European commercial launch capability to rival America’s.

But why are the Europeans concerned about Silicon Valley’s behaviour? Part of it is to do with Continental Europe’s darkest time in modern history, with the rise of the Hitler, Mussolini and Stalin dictatorships, underscored by Hitler’s Germany taking over significant areas of France and Eastern Europe before and during the Second World War. This was followed by the Cold War, where most of Eastern Europe was effectively a group of communist dictatorships loyal to the Soviet Union. In both situations, the affected countries were run as police states whose national security services conducted mass surveillance at the behest of the country’s dictator.

There are a few of these businesses putting themselves on the map. Of course we know that Spotify, the main worldwide online jukebox, is based in Sweden. But Sweden, the land of ABBA, Volvo, IKEA, Electrolux and Assa Abloy, also has CloudMe, a cloud-based file-storage service, on its soil. This is alongside SoundCloud, the go-to audio-content server for Internet-based talent, which is based in Germany. The French have also put their foot in the IoT space with a smart-lock retrofit kit whose Web management runs on a server based in France.

A few search engines are setting up shop in Europe, with the Unbubble.eu (German) and StartPage (Dutch) metasearch engines in operation, and the Qwant and Findx search engines creating their own indexes. But the gap I have noticed is the absence of a social network or display-ad platform that is based in Europe and supports the European personal and business values.

There are also the issues associated with competing head-on against the Silicon Valley giants, such as establishing a presence in the European or global market and defining your brand. Here, these companies would have to identify those people and businesses in Europe and around the world who place emphasis on the distinct European values, and learn how to compete effectively against the established brands.

The European Commission could help companies competing with the Silicon Valley IT establishment by providing information and other aid, along with maintaining a list of European-based companies who can compete with it. They could also underpin research and development for companies that want to innovate in a competitive field. This could include enabling multiple companies in the IT, consumer-electronics and allied fields to work together on services that have a stronger market presence and compete effectively with Silicon Valley.


Google Home–now in Australia and on your TV

Article

Google Home: Australian Review | Lifehacker Australia

From the horse’s mouth

Google

TV commercial – click or tap to play

My Comments

I have given regular mention of Google Home as a competitor in the smart home assistant space alongside Amazon Echo. At the time I wrote the previous articles, none of these services were available in Australia.

But as of this week, Google Home has become the first of the smart-home-assistant platforms to be released in Australia. The devices are offered for AUD$199 through at least JB Hi-Fi, Harvey Norman and Officeworks, and the platform has been linked to most of the ABC’s live and on-demand audio services, along with Sky News and Fox Sports for local news.

Google have even gone out of their way to promote this device on the TV by using an Australian-localised 30-second version of their Super Bowl TV ad including a question about the noise a kookaburra makes.

But I would suspect that Amazon won’t take this lying down and there will be pressure to make sure that their Echo devices and Alexa platform are on the scene as soon as possible. This would also apply to Microsoft and Apple when they get their home assistant platforms off the ground.


Microsoft cuts down the out-of-action time users face during the Windows upgrade cycle

Article

Windows will be improving the time it takes to upgrade the operating system

Microsoft is tweaking the Windows 10 upgrade process to reduce downtime | Windows Central

My Comments

A common situation we face when updating or upgrading the operating system on our computers, including our mobile devices, is that the device is out of action for some time while the update takes place.

Here, the holy grail for operating-system maintenance is for updates to take effect as soon as they are delivered, rather than needing us to restart the device at all. In the context of business, this means that workers and business owners are able to stay productive without waiting for an update cycle to complete.

Most of these processes involve downloading the files that contain the newest software code, then performing fail-safe procedures. The device is then rebooted to make sure the existing software files are not in use before they are replaced with the freshly-downloaded files. The update procedure then makes sure everything is in place before allowing the user to interact with the system.

Microsoft identified some of the problems associated with the upgrade cycle associated with their Windows operating system and found that a lot of the preparatory work could take place before the system has to be rebooted.

Previously, the user-created configuration and other data were backed up, and the operating-system files prepared for installation, only after the computer was rebooted to instigate the Windows software-update cycle. From the Windows 10 Fall (Autumn) Creators Update onwards, these procedures take place before the system has to be restarted, because most of them simply involve copying files between locations on the system disk.

The data-migration action will take place after the system is restarted, and the user data will be restored once all the downloaded files are in place. Then the system will be restarted again to make sure all the functionality is effective; and, as with major functionality upgrades, the user may have to interact with the system further to enable the new functionality.

The idea behind this move is to have all the preparatory work done while you are able to work with your computer so that it is out of action for a minimal amount of time.
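The staged flow described above can be sketched as two lists of phases: work done online while you keep using the computer, and work done offline between reboots. The phase names here are invented for illustration, not Microsoft's actual terminology:

```python
# Hypothetical sketch of a staged OS update: maximise online work, minimise downtime.

ONLINE_PHASES = ["download payload", "back up user data", "stage new files"]   # user keeps working
OFFLINE_PHASES = ["swap system files", "migrate user data", "final restart"]   # device out of action

def run_update(log):
    # Newer approach: all preparatory phases happen before any reboot.
    for phase in ONLINE_PHASES:
        log.append(("online", phase))
    log.append(("reboot", "restart #1"))
    # Only the unavoidable work remains for the offline window.
    for phase in OFFLINE_PHASES:
        log.append(("offline", phase))
    return log

log = run_update([])
downtime_steps = [phase for state, phase in log if state == "offline"]
```

The improvement Microsoft describes is essentially about moving phases from the second list into the first, since the user is only inconvenienced by the offline ones.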

The question here is whether this improved software-update process will take place for maintenance-level updates like the regular software patches and security updates that are delivered to keep Windows secure.

There is still an issue faced by all operating-system update procedures, especially with significant updates or where mobile devices are concerned: the update may require the device to be rebooted twice, spending time out of action during that cycle. It also encompasses the requirement for regular computers to reboot at least once while patches and security updates are being deployed.

But Microsoft’s step with improving the software-maintenance cycle for the Windows 10 operating system is getting us one step closer towards cutting down on the downtime associated with this process.
