Tag: Google Assistant

ARD takes interactive audio content to Germany

Articles (German language / Deutsche Sprache)

Amazon Echo press image courtesy of Amazon

ARD is now on to interactive audio drama for Amazon Echo and similar smart speakers — this time based on Tatort

INFODIGITAL – Der Tatort wird interaktiv: ‘Höllenfeuer’ (infosat.de)

Tatort Interaktiv (ARD Gruppe)

Previous coverage on interactive audio content

BBC introduces interactive radio drama using Alexa

My Comments

The BBC have used their world-famous radio-play craft to create an interactive audio drama that works hand-in-glove with the Amazon Alexa platform. Listeners interact with their Amazon Echo or Alexa-based smart speaker to direct how the story goes. Here, the radio-play expertise is intermingled with the approach of those “Choose Your Own Adventure” storybooks or the text-based adventure computer games.

Now the German ARD group of public-service broadcasters have taken on a similar effort, carrying it on the back of the “Tatort” crime-drama series that is a mainstay of German-language TV content. The concept is very similar to the early Police Quest series of crime-themed graphic adventure games that Sierra launched through the late 1980s and early 1990s, or the L.A. Noire video adventure game released in 2011 and set in late-1940s post-WWII Los Angeles.

They have taken this further by making it work on both the Alexa and Google Assistant platforms including their mobile-platform assistant apps as well as the smart speakers. In addition to this, ARD even provides a Web-based interactive audio adventure so you don’t have to use Amazon Alexa or Google Assistant for this kind of game.

What is distinctive about Tatort is that this German-language crime series has different investigation teams, each based in a different city or district within Germany, Austria or Switzerland, who solve the cases within that area. Each episode, which airs in the German-speaking countries through their public-service broadcasters on Sunday nights at 8:15pm local time, features a case from a different city.

This interactive audio play, called Höllenfeuer in German or Hellfire in English, has been prepared primarily by Bayerischer Rundfunk (BR) with the help of Westdeutscher Rundfunk (WDR) and uses the Munich-based Tatort crime-investigation team. The player controls the character of Kommissarin Mavi Fuchs, who works alongside Kriminalkommissar Kalli Hammermann to solve the case. You use voice commands to direct these protagonists in the interactive audio play, which can be replayed if you want to get a better grip on the case.

But what is happening is that some broadcasters are discovering the idea of mixing radio plays with interactive elements to provide the audio equivalent of classic adventure computer games, then linking these productions with voice-driven assistant platforms. The approach ARD have taken with Tatort is to use this new form of delivering audio content to take their existing intellectual property, especially a tentpole TV show, further.

It can be seen as a way to take an existing content franchise further and implement it in a new form, especially an interactive audio play. This is more so as smart speakers and other voice-driven-assistant devices become more popular.

The two-box voice-driven home assistant setup is being made real with Bluetooth

Article

Bang & Olufsen Beosound A1 Bluetooth smart speaker press image courtesy of Bang & Olufsen

Bang & Olufsen Beosound A1 2nd Generation Bluetooth smart speaker that works with a smartphone or similar device to benefit from Amazon Alexa

B&O Beosound A1 (2nd Gen) Announced With Alexa Integration | Ubergizmo

My Comments

At the moment, there is the latest generation of the Bose QuietComfort 35 noise-cancelling Bluetooth headset, which implements a software link with the Google Assistant voice-driven personal-assistant platform through its own app. Now Bang & Olufsen have come up with the Beosound A1 Second Generation battery-operated Bluetooth speaker, which has integration with the Amazon Alexa voice-driven home-assistant platform.

But what are these about?

Bluetooth smart speaker diagram

How the likes of the B&O Beosound A1 work with your smartphone, tablet or computer to be a smart speaker

These are purely Bluetooth audio peripherals that connect to your smartphone, which links with the Internet via Wi-Fi or mobile broadband. This is usually facilitated with a manufacturer-supplied app for that device that you install on your smartphone or tablet. You will also have to install the client software for the voice-driven assistant platform if your smartphone or tablet doesn’t have inherent support for that platform.

The Bose solution primarily used their app to “map” a secondary function button on the headset to activate Google Assistant. The B&O approach has the Beosound A1 and your smartphone or similar mobile-platform device working together as if they were an Amazon Echo.

Why do I see this as a significant trend for “smart-speaker” and allied-device use cases? Google, Amazon and the Voice Interoperability Initiative want to extend their voice-driven assistant platforms to setups based around Bluetooth audio peripherals, and it underscores the reality that the highly-capable host devices will have Internet connectivity via a mobile-broadband connection or a local-area network.

One reason is to allow manufacturers to provide a highly-portable approach to using Alexa or Google Assistant while on the move. Similarly, this approach will appeal to the automotive and marine infotainment sector, with end-users bringing their own Internet connection with them in their car or boat but wanting to use their preferred voice-driven assistant platform there.

Some technology manufacturers may look at the idea of a two-piece setup where a specially-designed Bluetooth speaker links with a device that is normally connected to the Internet, like a set-top box or router, with both devices working together in a smart-speaker capacity. Here, it can be about a cost-effective smart-speaker platform or about enabling battery-operated devices that use battery-efficient technologies.

What Bose and B&O are doing could bring the idea of a two-box smart-speaker setup to voice-driven assistant platforms, opening up some interesting pathways.

Google Nest Mini uses edge computing to improve search performance

Articles

Google Nest Mini smart speaker press picture courtesy of Google

The Google Nest Mini smart speaker – a follow-on from the Home Mini smart speaker with its own local processing to improve Google Assistant’s responsiveness

Google Nest Mini gets louder and gains onboard Assistant processing | SlashGear

Google debuts Nest Mini with wall mount and dedicated ML chip | VentureBeat

From the horse’s mouth

Google

Nest Mini

Nest Mini brings twice the bass and an upgraded Assistant (Product Blog Post)

My Comments

The Google Nest Mini smart speaker, which is the successor to the Google Home Mini smart speaker, shows a significant number of improvements including richer sound. But it has also come about with the idea of locally processing your voice commands for better Google Assistant performance.

The traditional approach to processing voice commands that are said to a smart speaker or similar device is for that device to send them out as a voice recording to the cloud servers that are part of the voice-driven assistant platform. These servers then implement their artificial-intelligence and machine-learning technology to strip background noise, interpret the commands and supply the appropriate replies or actions back to that device.

But Google has improved on this by taking a leaf out of the edge-computing book. This is where some of the data storage or processing is performed local to the user before the data is sent to a cloud-computing system. Here, Google uses a dedicated machine-learning processor chip in their Nest Mini smart speaker to do some of the command processing before sending data about the user’s command to the Google Assistant cloud system.
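To make the difference concrete, here is a minimal conceptual sketch in Python of the “resolve locally first, fall back to the cloud” pattern. Google has not published the Nest Mini’s actual pipeline, so the intent table and the helper functions below are hypothetical stand-ins for the on-device machine-learning model and the cloud round trip.

# Conceptual sketch of on-device ("edge") command handling with a cloud fallback.
# The intent table and both resolver functions are hypothetical stand-ins.
from typing import Optional
import time

# Frequently-used commands that an on-device model could resolve without a round trip.
LOCAL_INTENTS = {
    "turn on the lights": "smart_home.lights_on",
    "stop": "media.stop",
    "volume up": "media.volume_up",
}

def resolve_locally(transcript: str) -> Optional[str]:
    # Stand-in for the dedicated machine-learning chip's local model.
    return LOCAL_INTENTS.get(transcript.strip().lower())

def resolve_in_cloud(transcript: str) -> str:
    # Stand-in for the round trip to the assistant's cloud servers.
    time.sleep(0.3)   # simulated network and server latency
    return f"cloud_resolved({transcript})"

def handle_command(transcript: str) -> str:
    intent = resolve_locally(transcript)
    if intent is not None:
        # Answered on the device itself: quicker response, less Internet bandwidth used.
        return intent
    # Anything the local model can't handle still goes to the cloud.
    return resolve_in_cloud(transcript)

print(handle_command("volume up"))                # resolved locally
print(handle_command("what's the weather like"))  # falls back to the cloud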

This moves your Google Nest Mini smart speaker away from being a simple conduit between your home network and the Google Assistant cloud. The key benefit is that you see a quicker response from the Google Assistant via that device. You also benefit from a reduction in the Internet bandwidth associated with handling the voice-driven home-assistant activity, avoiding reduced performance for online gaming or multimedia streaming.

Google is working on taking this further by having Google-Assistant-based devices with this kind of local processing handle the logic associated with user requests and programmable actions locally. This also includes keeping the logic associated with the Assistant liaising with other smart devices local to your home network, allowing for improvements in performance, user privacy and data security.

It could be seen by Amazon and others as a way to improve the performance of their voice-driven home-assistant platforms. This is more so where the competition between these platforms becomes more keen. As well, there could be a chance for third-party Google Assistant (Home) implementations to look towards using local processing to improve the Assistant’s response.

An issue that will crop up is whether multiple devices with this kind of local processing on the same home network can help each other to increase the voice-driven assistant’s performance. This can also include using a software approach so that devices equipped with the local processing provide improved performance for those that don’t have it. It will be an issue for the likes of the Google Nest Mini and similar entry-level devices that appeal to the idea of having many installed around the house, along with the idea of equipping smart displays with this kind of local processing.

What I see of this is that the use of edge-computing technology is coming to the fore as far as improving responsiveness in the common voice-driven home-assistant platforms.

Using Google Assistant as part of an in-home-care service

Article

The Google Home is now part of ageing-at-home care, working with a home healthcare service

Feros Care plugs into Google Assistant to boost seniors’ independence | IT News

My Comments

The technology industry is working on making itself relevant to the “ageing-at-home” sector, where senior citizens, including the ageing Baby Boomer population, can live in their own homes or in supported accommodation while preserving their own privacy and dignity. But the goal of improved dignity for these seniors includes using technology that doesn’t look out of place in an ordinary home environment.

This system, run by Feros Care, implements Google Assistant technology as a base platform and uses Google Home smart-speaker devices as a voice-driven interface with the clients the agency is looking after. It also facilitates visual display through the Android TV smart-TV platform and the Google Home Hub smart display.

This is facilitated through the development of Google Actions and DialogFlow natural-language processing, with some custom application-programming-interface (API) software “hooks” to work with the agency’s MyFeros IT portal. It provides the client with access to details about carer appointments and further assistance amongst other things, while the MyFeros portal captures service-provider-to-client interactions.
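To give an idea of how such an integration could hang together, here is a minimal sketch of a DialogFlow fulfilment webhook, written in Python with Flask, that answers an appointment question by querying a client portal. The portal URL, endpoint and field names are hypothetical illustrations, not Feros Care’s actual MyFeros API.

# Minimal sketch of a Dialogflow ES fulfilment webhook that bridges a voice
# request ("When is my next carer appointment?") to an agency's client portal.
# The portal base URL, endpoint and field names are hypothetical.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
PORTAL_API = "https://portal.example.com/api"   # hypothetical client-portal base URL

@app.route("/webhook", methods=["POST"])
def webhook():
    payload = request.get_json(force=True)
    intent = payload["queryResult"]["intent"]["displayName"]

    if intent == "NextAppointment":
        # Look up the client's next booking in the (hypothetical) portal API.
        client_id = payload["queryResult"]["parameters"].get("clientId", "demo")
        booking = requests.get(
            f"{PORTAL_API}/clients/{client_id}/next-appointment", timeout=5
        ).json()
        reply = (f"Your next visit is on {booking['date']} at {booking['time']} "
                 f"with {booking['carer']}.")
    else:
        reply = "Sorry, I can't help with that yet."

    # Dialogflow expects the spoken answer in the fulfillmentText field.
    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)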

It is more about allowing senior citizens who use this agency for assisted living to manage their experience with the agency themselves and maintain their independence.

The use of the Google Home / Assistant voice-interaction technology can work around situations where the senior has had a fall and cannot gain access to the phone to summon help. Similarly, it works well when they are recovering in bed and don’t have a tablet or phone at their bedside. The Android TV / Home Hub smart-TV technology can be used to show visual information like details of alternate carers who are “filling in” for a regular carer who is ill or on leave and cannot attend.

Even smart-lock technology is coming into play to allow staff who are rostered on to care for a particular client access to that client’s home for the duration of their shift. This is because older people with limited mobility can take a long time to reach their front door to admit the carer into their home. The smart-lock integration will also work hand in hand with “visit-verification” requirements that will be demanded within the home-based healthcare industry thanks to various health-insurance or public-healthcare rules.

Feros Care underscores that the technology is not about staff efficiency and productivity but about serving the needs of their service’s end-users and protecting their dignity and independence.

But what I like about this approach is that they aren’t reinventing the wheel in implementing this technology and having to build new devices for their field of work. Rather, they use common “horizontal-market” technology like Google-Home-compatible smart speakers and smart displays compliant with Google’s smart-display technologies – equipment that can be purchased “off the shelf” at any consumer-electronics outlet and that blends in to an ordinary home.

I also see Feros Care in a position to offer the necessary software logic as a “white-label” solution for all sorts of home healthcare agencies, supported-housing facilities and the like who want to implement it in their client-carer IT-portal setups. But there will be issues like adapting to other consumer-focused voice-driven home assistant platforms like Alexa, along with making it work with the widest range of home-automation devices.

Here, it is about implementing common home-networking technology as part of assisted living simply by using software to provide this kind of integration.

Ultrasound being used as a way to measure user proximity to gadgets

Article – From the horse’s mouth

Google Nest

How ultrasound sensing makes Nest displays more accessible (The Keyword blog post)

My Comments

Google is implementing in their Nest Hub smart-display products an automatic display-optimisation feature that is based on a technology that has been in use for a very long time.

Ultrasonic technology has been used in various ways by nature and humans to measure distance. In nature, bats and dolphins, which don’t have good vision, use this approach to “see” their way. It is used extensively in military and civilian marine applications to see what is underneath a boat or around a submarine, and is also used as a common medical-imaging technique.

As well, in the late 1970s, Polaroid implemented ultrasound as part of their active autofocus system, which ended up as a feature for their value-priced and premium instant-picture cameras. Here, this was used to measure the distance between the camera and the subject in order to adjust the lens for proper focus. There were limitations associated with the technology like not being able to work when you photograph through a window due to the ultrasonic waves not passing through the glass.

But Google has implemented this technology as a way to adjust the display on their Nest Hub smart displays for distant or close operation. The front of a Google Nest Hub has an ultrasonic sensor that works in a similar way to what was used in a Polaroid auto-focus instant-picture camera.

But rather than using the distance measurement from the ultrasonic sensor to adjust a camera’s lens as the Polaroid setup did, this application adjusts the display according to the user’s distance from the Nest Hub. If you are distant from the Nest Hub, you see reduced information but the key details appear in a larger typeface. Then if you come closer to the Nest Hub, you see more detail but in a smaller typeface.
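As a rough illustration of the underlying arithmetic, here is a minimal sketch in Python of time-of-flight ranging feeding a layout decision. The distance threshold and the example figures are hypothetical; Google has not documented the Nest Hub’s actual sensing pipeline.

# Conceptual sketch: convert an ultrasonic echo delay into a distance and pick a layout.
SPEED_OF_SOUND_M_S = 343.0   # speed of sound in air at roughly room temperature
FAR_THRESHOLD_M = 1.5        # hypothetical cut-over point between the two layouts

def distance_from_echo(echo_delay_s: float) -> float:
    # Round-trip time-of-flight converted to a one-way distance in metres.
    return SPEED_OF_SOUND_M_S * echo_delay_s / 2.0

def choose_layout(distance_m: float) -> str:
    # Far away: fewer items in a large typeface. Close up: full detail, smaller typeface.
    return "glanceable_large_type" if distance_m > FAR_THRESHOLD_M else "detailed_small_type"

# An echo returning after 12 milliseconds puts the user about 2 metres away,
# so the display would show the reduced, large-type layout.
print(choose_layout(distance_from_echo(0.012)))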

Nest Hub Directions display press picture courtesy of Google

The Google blog article described this as being suitable for older users and those of us who have limited vision. The fact that you have the ability to see key information in a large typeface at a distance can make the Nest Hub accessible to this user group. But others can’t see deeper information unless they are very close to the device.

End-user privacy is still assured thanks to the use of a low-resolution distance-measurement technology whose results are kept within the device. As well, there is a menu option within the Google Home app’s Device Settings page to enable or disable the feature.

Initially it is being used for the timer and current-time displays as well as for showing travel time and traffic conditions for a planned journey that you set up with Google Maps. But Google and other software developers who develop for the Google Home ecosystem could add distance-sensitive display functionality to more applications like appointments and alerts.

Some people could see this technology being used not just to optimise the readout on a smart display but even to ascertain whether people are actually near these devices. This could then enable functionality like energy-saving behaviour where the display turns off if no-one’s near it.

But what Google has to do is license out this technology to allow others to implement it in other fixed-display-based devices. Here, it could become of more use to the many who don’t go for a Google Nest Hub.


You can manage your Google Home’s lighting and volume for night-time use

Article

You can use the Google Home app to manage how your Google Home smart speaker works during the evening and night hours for a better night’s sleep

Google Home Has A Hidden Night Mode (And We Love It) | Lifehacker

From the horse’s mouth

Google

Manage volume and LED brightness with Night mode (Instruction sheet)

My Comments

Some of you may want to interact with a Google Home smart speaker during the wee hours of the day such as to ask for the time or weather. Or you may touch the device and work it as a night-light. But it can be too bright or loud at these times in a way that people can be woken up at odd hours. Here, some users have to adjust the volume to avoid this risk of disturbance.

But Google has a “Night Mode” feature that allows you to determine the maximum volume and device lighting brightness during times when you don’t want to be disturbed.

Here, you have to use the Google Home mobile-platform app on your smartphone or tablet. As well, your mobile device has to be on the same logical network as the Google Home device, which would typically be the same Wi-Fi network when you are thinking of your home network.

When you open the Google Home app, tap on the gear-shaped Settings icon and you will see the “Night Mode” setting in the list of settings. There is a toggle switch to enable or disable this mode; when you enable it, the LEDs on the top of your Google Home device will dim and the maximum speaker volume will be softer.

There is an option to schedule the times and days of the week when the Night Mode feature will be active. This may be of importance if you want to make sure it comes into play on weeknights, for example.

There are settings to determine the maximum speaker volume and lighting brightness that will apply to your Google Home smart speaker while it is in the “Night Mode” condition.

The Do Not Disturb option on the Night Mode settings page mutes any notification or system sounds that your Google Home smart speaker makes. But timer or alarm signals will “break through” this setting so you don’t miss that extra alarm you set to wake you up so you can pick up that loved one from the airport as they come off that late flight.

But I am not sure whether these settings can be configured for individual devices or all of the devices bound to the same account. This may be of importance if you want to reduce the volume and lighting brightness on units installed in the bedrooms while one or more units installed in the common living areas are maintained at bright levels; or you want to maintain a common setting across your home.

A feature that can improve on this would be to have the LEDs on top of a Google Home device stay alight but at the maximum brightness to allow you to use the device as a night light. This is more so for those of us who would keep one of these devices in the corridor near the main bathroom or within that bathroom to serve as a “beacon” night-light to enable use of the bathroom at night.

At least Google has provided an option to allow the Google Home device family to work properly without disturbing other people’s sleep at night.

Celebrity voices to become a new option for voice assistants

Article

How to Make John Legend Your Google Assistant Voice | Tom’s Guide

Google Assistant launches first celebrity cameo with John Legend | CNet

How to make John Legend sing to you as your new Google Assistant voice | CNet

From the horse’s mouth

Google

Hey Google, talk like a Legend (Blog Post)

Video – Click or tap to play

My Comments

Google is trying out a product-differentiating idea of using celebrity voices as an optional voice that answers you when you use their Google Assistant.

This practice of using celebrity voices as part of consumer electronics and communications devices dates back to the era of telephone answering machines. Here, people could buy “phone funnies” or “ape tapes” which featured one-liners or funny messages typically recorded by famous voices such as some of radio’s and TV’s household names. This was superseded through the 1990s by downloadable quotes that you could use for your computer’s audio prompts or, eventually, for your mobile phone’s ringtone.

Now Google has worked on the idea of creating what I would call a “voice font”, which uses a particular voice to enunciate text provided in a text-to-speech context. This is equivalent to the use of a typeface to determine how printed text looks. It also encompasses the use of pre-recorded responses that are used for certain questions, typically underscoring the particular voice’s character.

The technology Google is using is called WaveNet which implements the neural-network and machine-learning concept to synthesise the various voices in a highly-accurate way. But to acquire the framework that describes a particular voice, the actor would have to record predefined scripts which bring out the nuances in their voices. It is part of an effort to provide a natural-sounding voice-driven user experience for applications where the speech output is varied programmatically such as voice-driven assistants or interactive voice response.
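To show what a WaveNet “voice font” looks like from a developer’s point of view, here is a minimal sketch using Google’s Cloud Text-to-Speech client library, which exposes stock WaveNet voices. The John Legend cameo voice is a Google Assistant feature and is not available through this API; the voice name below is just an ordinary WaveNet voice, and the script assumes you have Google Cloud credentials set up.

# Minimal sketch: synthesise a phrase with a stock WaveNet voice via the
# Google Cloud Text-to-Speech client library (pip install google-cloud-texttospeech).
# Requires Google Cloud credentials; "en-US-Wavenet-D" is one of the stock voices.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(
        text="Good morning! Here is your first appointment for today."
    ),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Wavenet-D",   # any WaveNet voice listed by client.list_voices()
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)

with open("greeting.mp3", "wb") as out_file:
    out_file.write(response.audio_content)   # a playable MP3 of the synthesised phrase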

At the moment, this approach can only happen with actors who are alive and can come into a studio. But I would see WaveNet and similar technologies eventually set up to work from extant recordings where the actor isn’t working to a special script used for capturing their voice’s attributes, including where the talent’s voice competes with other sounds like background music or sound effects. By working from these recordings, it could be about using the voices of evergreen talent who have passed on, or using the voices that the talent used while performing in particular roles that underscored their fame. A good example of this application would be the actors who performed in those classic British TV sitcoms of the 1970s, or the voices of Peter Sellers, Spike Milligan, Harry Secombe and Michael Bentine as they sounded in the Goon Show radio comedy.

Google is presenting this in the form of a special-issue “voice font” representing John Legend, an actor and singer-songwriter who has sung alongside the likes of Alicia Keys and Janet Jackson. Here, it is being used as a voice that one can implement on their Google Home, Android phone or other Google-Assistant device, responding to particular questions you ask of that assistant.

Amazon and others won’t take this lying down especially where the voice-driven assistant market is very competitive. As well, there will be the market pressure for third parties to implement this kind of technology in their voice-driven applications such as navigation systems in order to improve and customise the user experience.

Third-party Google Assistant smart displays start benefiting from Google Home Hub features

Article

The Lenovo Smart Display is now to be at the same level as the Google Home Hub

Lenovo to Send Out Update to Smart Displays That Has All the Google Home Hub Goodies | Droid Life

Lenovo updates Smart Display with Google Home Hub features | Engadget

From the horse’s mouth

Lenovo

New features coming to Smart Display (Oct 22nd) (Forum Post – Press Release)

My Comments

Google recently launched their Home Hub smart display, which is a “first-party” effort to offer more than what the Google-Assistant-based third-party smart displays offer.

This included multiroom audio functionality with Google’s Chromecast, Home smart speakers and other similar devices; Live Album display for the Google Photos application; tight integration with the Nest Hello smart doorbell; a dashboard user interface for your compatible Internet-of-Things devices on your home network; amongst other features.

But they have started rolling out extra software code to third-party Smart Display manufacturers to open up these extra features to their Google-based smart-screen devices. The first of these to benefit from this update are the Lenovo Smart Displays which will benefit from a firmware update (version 3.63.43) to be rolled out from October 22nd 2018.

The firmware will be automatically updated in your Lenovo Smart Display and you can check if it is updated through the Settings menu. Here, you have to “swipe up” from the bottom of your Lenovo Smart Display’s screen to expose the Settings icon, which you would tap to bring up the menu.

The question that will surface for others with similar Google-based smart displays like the JBL Link View would be if and when the display’s manufacturer will roll out the firmware update for their devices. It is something that similarly happens with the Android mobile-device platform, where the Google first-party devices get the latest software updates and features first while third-party devices end up with the software a few months later. This is ostensibly to allow the device manufacturer to “bake in” their user interface and other features into the package.

But could the Google-based Assistant / Home platform simply end up as the “Android” for voice-driven smart-display devices?

Google Assistant has the ability to break the bad news cycle

Article

Google’s Assistant is here to give you a break from the horrible news cycle | FastCompany

From the horse’s mouth

Google

Hey Google, tell me something good (Blog Post)

Video – Click or tap to play

My Comments

The news cycle that you hear in the USA has been primarily focused on bad news, especially what President Trump is up to or some natural disaster somewhere around the world. A very similar issue is happening in other countries. A common issue that is drawn out regarding this oversaturation of bad news is that it can bring about fear, uncertainty and doubt regarding our lives, with some entities taking advantage of it to effectively manipulate us.

Some of us follow particular blogs or Facebook pages that contain curated examples of good news that can break this monotony and point to solutions for the highlighted problems. But Google is extending this to a function they are building into the Google Assistant platform, with stories that are curated by people rather than machines and, in a lot of cases, derived from a variety of media sources. This is facilitated by the Solutions Journalism Network non-profit, which concentrates on solution-focused media coverage.

Of course, there will be the doubters and skeptics who will think that we aren’t facing reality and are dwelling in the “Hippie Days” and the “Jesus People” era of the 1960s and early 1970s. But being able to come across positive solutions for the various problems being put forward, including people working “outside the box” to solve those problems, can inspire us.

This feature is being offered on an experimental basis in the USA only and can be used on your Google Home or other Google-Assistant devices. But as this application is assessed further, it could easily be made available across more countries.

JBL Link View Google-powered smart speaker up for pre-order

Articles

JBL Link View lifestyle press image courtesy of Harman International

JBL Link View now up for preorder as the next Amazon Echo Show competitor | CNET News

JBL’s Google-powered smart display launches next month for $250 | The Verge

JBL’s Google-powered smart display is available for preorder | Engadget

JBL Link View Google Assistant smart display up for pre-order, ships September 3rd | 9 to 5 Google

From the horse’s mouth

JBL

Link View (Product page – link to preorder)

My Comments

The Amazon Echo Show is just about to face more competition from the Google Assistant (Home) front, with JBL taking advance orders for their Link View smart speaker. This comes even though Lenovo has only just started to roll out a production run of their Smart Displays, which are based on the Google Assistant (Home) platform.

JBL have been taking advance orders on this speaker since Wednesday 2 August 2018 (USA time), with the units costing USD$250 apiece. They expect to have them fully available in the US market by September 3 2018 (USA time). The display on this unit serves the same purpose as the one on the Lenovo Smart Displays, where it simply augments your conversation with Google Assistant with a visual experience.

These units look a bit like a boombox or stereo table radio and have an 8” high-definition touchscreen along with two 2” (51mm) full-range speakers, each separately amplified and flanking the screen for stereo sound reproduction. This traditional approach, with the stereo speakers at each end of the device, leads to better perceived stereo separation. CNET saw this as offering more “punch” for music content compared to other “smart-display” devices that they experienced.

There is a camera to work with Google Duo, but this device has also been designed to take care of user privacy needs thanks to a privacy shutter over the camera along with a microphone mute switch.

Like other Google Assistant (Home) devices, the JBL Link View can work as a wireless speaker for Chromecast Audio and Bluetooth links from mobile devices.

This is the start of something happening with the Google Assistant (Home) platform, where the devices from Lenovo and JBL offer more than what Amazon currently provides with their smart displays. It includes the stereo speakers on the JBL Link View along with larger displays for both the Lenovo and JBL products. LG and Sony intend to launch their Google-powered smart displays soon, but I don’t know when.

Personally, I would see Amazon and Google establishing a highly-competitive market for smart speakers and allied devices especially if both of them answer each other with devices of similar or better standards. As well, licensing the Alexa and Google Assistant (Home) standards to third-party consumer-electronics companies will also open up the path for innovation including incremental product-design improvements.