Tag: voice-driven home assistant

G’Day! Alexa has been taught Australian slang

Article

Australian flag

Alexa partners with The Betoota Advocate (mumbrella.com.au)

Betoota Teaches Alexa Aussie Slang – (smarthouse.com.au)

Alexa Looks To Expand Her Knowledge Of Australia By Teaming Up With The Betoota Advocate – B&T (bandt.com.au)

My Comments

Australia does have its own slang and culture which has been celebrated through Australian films and television like “Crocodile Dundee” or “Neighbours”, or the 1980s Paul Hogan “Throw A Shrimp On The Barbie” ad. There was even a book called “G’Day Teach Yourself Australian” (Amazon link) which conveyed the look of a foreign-language courseware book but taught Australian slang and culture to English-speaking travellers in a humorous way. Even the current popularity of “Bluey” amongst families in other countries is putting Australian culture increasingly on the map.

Amazon Echo press image courtesy of Amazon

Amazon Alexa is now learning Australian slang and culture

But voice-driven assistants like Amazon Alexa or Google Assistant weren’t taught that kind of information, which made it difficult for Australians to use these assistants in a manner that feels natural to them.

A previous approach to supporting regional dialects within a language was the BBC’s effort at a voice assistant. This responded to British English and even supported the various regional accents and dialects used across the UK. But it was focused on access to the BBC’s own content and currently isn’t able to work with other voice-assistant platforms as a linguistic “module”.

Now Amazon have worked with the Betoota Advocate to “teach” Alexa about Aussie slang and culture. It is not just the slang and colloquial speech that she has had to understand, but also matters relating to Australian life and culture. For example, being able to answer which AFL or NRL club won their respective code’s Grand Final, or to summon up the latest Triple J Hottest 100 as a playlist.

In the case of the football Grand Finals, there may be an issue about which football code is referred to by default when you ask about the winner of one of these season-deciding matches without identifying a particular code. This is because New South Wales and Queensland tend to think of the NRL rugby-league code first, while the other States think of the AFL Australian Rules code.

It could even be something like “How do I pay the rego on the ute?”, which could lead you to your State government’s motor registry or, if it is supported, start the workflow for paying that vehicle registration.

Australians and foreigners can even ask Alexa the meaning of a particular slang term or colloquialism so they can become familiar with the Australian vernacular. This could be useful from Alexa anywhere in the world, especially if you are talking with Australian expats or finding that a neighbourhood is becoming a “Little Australia”. Or if you are from overseas and take an interest in Australian popular culture, you may find this resource useful.

A feature that may have to come next for this Australian-culture addition to Amazon Alexa is support for translating Australian idioms to and from languages other than English. This is more so where Australian culture is being exposed to countries that don’t use English as their primary language, or where those countries acquire a significant Australian diaspora. Examples of each situation are the popularity of MasterChef Australia within the Indian subcontinent and the presence of Australian expat communities within Asian and European countries.

This addition of Australian slang and culture to Alexa is available to all devices that support the Amazon Alexa voice-driven assistant. This ranges from Amazon-designed equipment like the Amazon Echo smart speakers to third-party devices that implement Amazon Alexa technology.

At least this is an example of how a voice-driven-assistant provider can work towards courting countries and diasporas it sees as viable markets. The next step may be to encourage modular extensions that enable voice-driven assistants to work with multiple languages, dialects and cultures.

How you can use Amazon Alexa to measure room temperature

Article

Amazon Echo press image courtesy of Amazon

These new Echo speakers can work as temperature sensors for a room

How to Get an Amazon Echo to Tell You a Room’s Temperature (lifehacker.com)

My Comments

Newer Amazon Echo smart speakers are being equipped with room-temperature sensors that contribute this data to the Alexa smart-home subsystem.

Here, the devices you need to use in the rooms you want to measure the temperature of are:

  • Amazon Echo 4th Generation (spherical) or newer generation
  • Amazon Echo Plus 2nd Generation (cylindrical) or newer generation

To confirm your Amazon Echo device’s room-temperature measuring ability, open the Alexa app or http://alexa.amazon.com and log in to your Amazon account. Then go to “Devices”, then “Echo & Alexa”, and select the Echo smart speaker you want to verify. Here, look for the “Temperature Sensor” field, which will show the current room temperature if your Echo speaker is suitably equipped.

Each Echo device that you want to use as a temperature sensor has to be given a unique room name. Then to ask Alexa for the current room temperature of a particular room, you say “Alexa, what is the temperature for <desired room name>?”

There are limitations with this setup at the moment. You can’t ask for a house-wide indoor temperature or the indoor temperature of a room cluster like upstairs. This is because Amazon hasn’t worked out whether to report the temperature of an area covered by multiple devices as an average or in some other way, nor has it added the necessary logic to do so.

But you can create a temperature-based routine for the Alexa smart home that works with this reading. For example, you may have a heater come on if the room falls below a minimum temperature, or a fan come on if it exceeds a maximum. This suits a situation where you have an occasionally-used room that isn’t part of your central HVAC setup and you use portable heating or cooling equipment for that room.

Or you may want to be alerted if a room falls below a critical temperature so you can take steps to prevent frost damage or pipes freezing up.
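To make the routine idea concrete, here is a minimal Python sketch of that threshold logic. The room_temperature, turn_on and send_alert helpers are hypothetical stand-ins for whatever your smart-home integration exposes; real Alexa Routines are configured in the Alexa app rather than written as code.

```python
# Hypothetical helpers standing in for your smart-home integration -
# real Alexa Routines are configured in the Alexa app, not coded like this.
def room_temperature(room: str) -> float:
    return 2.5  # simulated reading; a real setup would query the Echo's sensor

def turn_on(device: str) -> None:
    print(f"Turning on {device}")

def send_alert(message: str) -> None:
    print(f"ALERT: {message}")

HEAT_ON_BELOW_C = 16.0     # heater threshold for an occasionally-used room
FROST_ALERT_BELOW_C = 3.0  # critical level for frost / frozen-pipe warnings

def check_room(room: str, heater: str) -> None:
    temp = room_temperature(room)
    if temp <= FROST_ALERT_BELOW_C:
        send_alert(f"{room} is at {temp:.1f} degrees - risk of frost or frozen pipes")
    if temp <= HEAT_ON_BELOW_C:
        turn_on(heater)  # e.g. a portable heater on a smart plug

check_room("Guest room", "Guest room heater")
```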

To make this more useful, Amazon will need to allow averaging of multiple temperature sensors so you can measure areas larger than a single room. It could also cater to environments where you have multiple suitably-equipped Echo speakers in one space, such as a large kitchen / dining area.
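If Amazon were to add such averaging, the arithmetic itself is straightforward. A minimal sketch, assuming made-up device names and readings:

```python
# Simulated readings from suitably-equipped Echo devices; in a real setup
# these would come from each device's built-in temperature sensor.
readings = {
    "Kitchen Echo": 21.4,
    "Dining area Echo": 22.0,
    "Upstairs landing Echo": 19.1,
}

def area_temperature(device_names: list[str]) -> float:
    values = [readings[name] for name in device_names if name in readings]
    return sum(values) / len(values)

# A single figure for an open-plan space covered by two devices
print(f"Kitchen / dining area: "
      f"{area_temperature(['Kitchen Echo', 'Dining area Echo']):.1f} degrees")
```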

A Sharp Alexa-enabled microwave could be about task-driven cooking

Article

Sharp Smart Countertop Microwave Oven press picture courtesy of Sharp USA

Sharp’s first smart countertop microwave oven features Wi-Fi connectivity and certified Works with Alexa compatibility for hands-free operation using voice commands.

Sharp’s New Alexa-Powered Microwave Is Even More Confusing Than Amazon’s | Gizmodo

From the horse’s mouth

Sharp USA

Sharp Launches its First Smart Countertop Microwave Ovens (Press Release)

My Comments

Most of us who use a microwave oven tend to specify cooking times and power intensities for each cooking job, even though most of today’s microwave ovens offer job-specific cooking functions. At most, some of us may use a “popcorn” function to cook microwave popcorn.

These functions can confuse most of us because of the different approaches to invoking them that exist between different makes and models of microwave oven. Other differences also crop up, such as how long these tasks are expected to take. It is analogous to working from any recipes that are part of your microwave oven’s documentation, because these may not work out correctly if you end up using a different appliance.

This issue will become more important as more of us place value on the microwave as a cooking option for something like, perhaps, those green vegetables. It can also bamboozle anyone who normally uses traditional cooking techniques like the conventional oven but has to rely primarily on the microwave oven, such as when staying in a serviced apartment or Airbnb.

Amazon had already released to the US market the AmazonBasics microwave that works with its Alexa voice-assistant ecosystem. But this is seen as an elementary appliance that answers only the most common cooking tasks. Sharp has now come to the fore with two of its own microwaves released to the US market.

Here, the difference is to use Alexa as a gateway to the advanced cooking tasks that these microwaves offer. The press release talked of saying to an Amazon Echo device “Alexa, defrost 2 pounds of meat” and the microwave being set up to thaw out two pounds of frozen meat. The larger of the two models will let you ask Alexa to set the microwave up for something like cooking broccoli or other veggies.

I see this as being about using voice assistant platforms to open up a common user interface for the advanced microwave-cooking tasks that your microwave would offer. But for this to work effectively, the user needs to know what the expected cooking time would be for the task and when they need to intervene during the cooking cycle.
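To illustrate what a task-driven interface hides from the user, here is a rough Python sketch that maps a spoken task and quantity to time-and-power settings. The task names, times and power levels are invented for illustration and are not Sharp’s or Amazon’s actual cooking profiles.

```python
# Illustrative task table only - real cooking times and power levels differ
# between makes and models, which is exactly what a voice interface can hide.
TASKS = {
    "defrost meat": {"seconds_per_pound": 360, "power_percent": 30},
    "steam broccoli": {"seconds_per_pound": 240, "power_percent": 100},
    "popcorn": {"seconds_per_pound": 150, "power_percent": 100},
}

def plan_cooking(task: str, pounds: float) -> dict:
    """Translate a spoken task like 'defrost 2 pounds of meat' into settings."""
    profile = TASKS[task]
    return {
        "time_seconds": int(profile["seconds_per_pound"] * pounds),
        "power_percent": profile["power_percent"],
    }

print(plan_cooking("defrost meat", 2))  # {'time_seconds': 720, 'power_percent': 30}
```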

As well, more of the voice-assistant platforms need to come on board for this approach to advanced microwave cookery. Let’s not forget that the display-based voice assistants can come into their own in this use case, such as by listing ingredients and preparation steps for what you intend to cook.

Here, the voice assistants will become a way to lead users to use the microwave beyond reheating food, melting butter and chocolate or cooking microwave popcorn.

Amazon’s next generation of Echo devices to use edge computing

Articles

Amazon Echo press image courtesy of Amazon

New Amazon Echo devices

Everything Amazon announced at its September event | Mashable

Amazon hardware event 2020: Everything the company announced today | Android Authority

Use of edge computing in new Echo devices

Amazon’s Alexa gets a new brain on Echo, becomes smarter via AI and aims for ambience | ZDNet

From the horse’s mouth

Amazon

Introducing the All-New Echo Family—Reimagined, Inside and Out (Press Release)

New Echo (Product Page with ordering opportunity)

New Echo Dot (Product Page with ordering opportunity)

New Echo Show 10 (Product Page with ordering opportunity)

My Comments

Amazon are premiering a new lineup of Echo smart-speaker and smart-display devices that work on the Alexa voice-driven home assistant platform.

These devices convey a lot of the aesthetics one would have seen in science-fiction or “future living” material written from the 1950s to the 1970s. This is augmented by an indoor camera drone device that Amazon released around the same time.

As well, all of these devices have a spherical look that conveys the retro-futuristic industrial-design style put forward from the 1950s to the early 1970s thanks to the Space Race, as seen in the Hoover Constellation vacuum cleaner of that era or the Grundig Audiorama speakers initially designed in the 1970s. You might as well even ask Alexa to pull up and play space-age bachelor-pad music from Spotify or Amazon Music through these speakers.

This is augmented further by the base of the Echo and Echo Dot lighting up as a pin-stripe to indicate the device’s current status. The industrial design also permits a 360-degree sound approach that can impress you a lot. The new Echo is also a smart-home hub that works with Zigbee, Bluetooth Low Energy and Amazon Sidewalk devices, so you don’t need a separate hub for them.

Amazon Echo Dot Kids Edition (Tiger and Panda) press image courtesy of Amazon

Amazon Echo Dot Kids Edition that is available either as a panda or tiger – for the young or young at heart

The small Echo Dot comes in two variants, one of which has a clock on the front while the other doesn’t. It is also offered as a “Kids’ Edition” with a choice of a panda face or a tiger face, as part of Amazon’s Alexa Kids program which provides a child-optimised package of features for this voice assistant. But I also wonder whether this can be run as a regular Echo Dot device, which may appeal to those adults who are young at heart and want that mischievous look these devices have.

The Echo Show 10 smart display uses automatic panning to face the user, driven by machine vision from the camera along with the microphone array. But the camera has a shutter for your privacy. Of course, you can use this device as a videophone thanks to the Alexa platform’s support for Amazon’s own calling platform as well as Zoom and Skype.

Amazon Echo Show 10 press image courtesy of Amazon

Amazon Echo Show 10 that can swivel towards you

What makes this generation of Echo devices more interesting is that they implement an edge-computing approach to improve sound quality and intelligibility when it comes to interacting with Alexa. This is even opening up ideas like natural-flow conversations with Alexa or allowing Alexa to participate as an extra person in a multiple-person conversation. It is furthering Amazon’s direction towards implementing ambient computing on their Alexa voice-driven assistant platform.

But Google was the first to implement this concept in a smart-speaker / voice-driven-assistant use case, using it in the Nest Mini smart speaker to improve the Google Assistant’s recognition of your commands.

Amazon Echo Show 10 in videocall - press image courtesy of Amazon

Oh yeah, you can make and take Zoom or Skype videocalls on an Amazon Echo Show 10

I do see this as a major direction for smart-speaker and voice-driven-assistant technology because it improves responsiveness and user interaction with these devices. It may even be about keeping premises-local configurations and customisations in the device’s memory rather than in the cloud, which may improve a lot of use factors. For example, it may be about user privacy, with minimal user data held on remote servers. Or it could be about an optimised, highly-responsive setup for the home-automation systems we build around these devices.

What needs to be looked at is a way to implement localised peer-to-peer sharing of data between smart speaker devices that are on the same platform and are installed within the same home network. This can allow users to have the same quality of experience no matter which device they use within the home.

There also has to be support for devices with the edge-computing smarts to process data locally on behalf of devices that don’t have them. This would be important if you bring in a newer device with this functionality and effectively “push down” the existing device to a secondary usage scenario. In this use case, having another device with the edge-computing smarts on your home network and bound to your voice-driven-assistant platform account could assure the same kind of smooth user experience as using the new device directly.

These Amazon Echo devices are showing a new direction for voice-driven home assistant devices to allow for improved intelligibility and smoother operation.

Amazon Alexa to support voice-activated printing

Article

Amazon Echo on kitchen bench press photo courtesy of Amazon USA

Amazon has improved on the way you can order documents to be printed via the Echo or Alexa-compatible device

Amazon launches Alexa Print, a way to print lists, recipes, games and educational content using your voice | TechCrunch

From the horse’s mouth

Amazon Alexa

Introducing: Alexa Print (Product Page)

What Can I Print (Product Page with Key phrases)

My Comments

Initially, Amazon partnered with HP to offer voice-activated document printing, where you could ask Alexa to print out colouring pages, sudoku puzzles, ruled paper and the like. But this tied HP’s ePrint documents-on-demand ecosystem to the Amazon Alexa voice-driven home assistant platform and limited the feature to HP ePrint-capable network printers. Some other manufacturers then bound their own online printing functionality to Amazon Alexa so as to provide some form of voice-driven printing.

Brother DCP-J562DW multifunction printer positioning image

.. even through printers like this Brother DCP-J562DW multi-function printer

Now Amazon has evolved this feature to work with any network printer that supports IPP-based driver-free printing. That is usually a machine that supports the Apple AirPrint or Mopria driver-free printing protocols, which encompasses most of the printers made over the last five years. Here, the documents are held on or constructed by Amazon’s servers rather than on HP’s servers.

To get going, you say “Alexa, discover my printer”. This has your Amazon Echo or similar Alexa-capable device discover any network printer on the same logical network as itself. Alternatively, you can use the Alexa app to discover the printer: tap the “+” icon, select “Add Device”, then choose “Printer” as the device class to add. The app will list any compatible printers on your home network so you can add them.
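The discovery step would rely on the same kind of network announcements that AirPrint and Mopria capable printers already make over mDNS / DNS-SD. As a rough illustration of that discovery mechanism (not Amazon’s actual implementation), this sketch uses the third-party Python zeroconf package to list IPP printers on the local network:

```python
# Sketch: list IPP (driver-free) printers advertised on the local network via
# mDNS / DNS-SD - the same announcements AirPrint / Mopria printers make.
# Requires the third-party 'zeroconf' package (pip install zeroconf).
from zeroconf import ServiceBrowser, Zeroconf

class IppPrinterListener:
    def add_service(self, zc, service_type, name):
        info = zc.get_service_info(service_type, name)
        if info:
            print(f"Found printer: {name} at {info.server}:{info.port}")

    def update_service(self, zc, service_type, name):
        pass

    def remove_service(self, zc, service_type, name):
        print(f"Printer went away: {name}")

zc = Zeroconf()
browser = ServiceBrowser(zc, "_ipp._tcp.local.", IppPrinterListener())
try:
    input("Browsing for IPP printers - press Enter to stop\n")
finally:
    zc.close()
```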

The Alexa app gives you fine-grained control, so you can rename printers to something like “Upstairs printer” or “Kitchen printer”, or delete or disable discovery of specific machines.

Amazon has, at the moment, partnered with particular publishers to offer printable items and has set up some basic printables like ruled paper, arithmetic worksheets and the like to get you going. There is also the ability to turn out crosswords (including their answers) along with recipes, although this may be a bit hit-and-miss.

HP OfficeJet 6700 Premium business inkjet multifunction printer

.. or this HP OfficeJet 6700 desktop multifunction printer

It also ties in with the ability for you to use Alexa to buy first-party (genuine) ink or toner for your printer through their online storefront. Here, it will know which cartridges fit your machine, but the question is whether there will be the ability for you to specify standard-yield or high-yield consumables. That is because some manufacturers like HP and Brother offer their consumables in differing yield levels which may suit your needs or budget better.

At the moment, the number of printable resources will be limited until Amazon encourages Alexa Skills developers to build out Skills that support printing. Here, it could be things like asking for a rail timetable to be printed out, or Amazon could even exploit Alexa Print to facilitate transactional printing like turning out tickets and boarding passes.

It will be interesting to see whether Google or Apple will bind the driver-free printing platforms they own or partner with to their own voice-driven assistant platforms to allow this kind of printing.

You can prevent mistaken voice-assistant behaviour from your smart speakers

Article

Amazon Echo on kitchen bench press photo courtesy of Amazon USA

The mute button on your Echo or other smart speaker is important if you want greater control over the voice assistant

You Should Mute Your Smart Speaker’s Mic More Often | Lifehacker Australia

My Comments

An issue that will plague people who own smart speakers like the Amazon Echo or Google Home is the device interjecting with responses when you or someone else unintentionally say certain words.

This is because these devices are typically set up to listen all the time for a particular “wake word” that actually invokes the voice assistant. It is part of the machine-learning that drives these voice-assistant platforms to understand what you say.

But you can have some control over these smart speakers to avoid this behaviour. Each smart speaker or similar device will have a hardware mic-mute switch on it, highlighted with a microphone icon. This effectively turns the device’s microphones on or off as you need, and the article recommends that if the device is too “hair-trigger”, you keep the microphones muted unless you are actually intending to interact with the voice assistant. This is similar to how you would work a voice-driven personal assistant that is part of your smartphone’s operating system, where you deliberately press a button on the device or a Bluetooth headset to invoke that assistant before you speak your command.

Beware that the button will light up when you enable microphone muting on your smart speaker. This may be a point of confusion, because some users may think the device is “ready” to be spoken to when it is in fact not able to take commands. You may have to become familiar with how your smart speaker looks when it is ready to accept commands, including which lights are on.

If your voice-assistant platform has a function to “edit” what has been captured, as Amazon Alexa can do, this can be used to fine-tune what the voice assistant is meant to respond to. The same control app or website can be used to manage your smart speaker’s microphone sensitivity or change the “wake word” that you say when you start interacting with the assistant.

They even recommend that you disconnect the smart-speaker device from the power if you don’t intend to use it. Privacy advocates even suggest doing that in areas where you value your privacy, like your bedroom or bathroom.

But personally I would at least recommend that you are familiar with the hardware controls that exist on your smart speaker or similar device so you are able to have greater control over that platform.

The two-box voice-driven home assistant setup is being made real with Bluetooth

Article

Bang & Olufsen Beosound A1 Bluetooth smart speaker press image courtesy of Bang & Olufsen

Bang & Olufsen Beosound A1 2nd Generation Bluetooth smart speaker that works with a smartphone or similar device to benefit from Amazon Alexa

B&O Beosound A1 (2nd Gen) Announced With Alexa Integration | Ubergizmo

My Comments

At the moment, there is the latest generation of the Bose QuietComfort 35 noise-cancelling Bluetooth headset, which implements a software link with the Google Assistant voice-driven personal assistant through its own app. Now Bang & Olufsen have come up with the Beosound A1 Second Generation battery-operated Bluetooth speaker that integrates with the Amazon Alexa voice-driven home assistant platform.

But what are these about?

Bluetooth smart speaker diagram

How the likes of the B&O Beosound A1 work with your smartphone, tablet or computer to be a smart speaker

These are purely Bluetooth audio peripherals that connect to your smartphone which links with the Internet via Wi-Fi or mobile broadband. This is usually facilitated with a manufacturer-supplied app for that device that you install on your smartphone or tablet. You will also have to install the client software for the voice-driven assistant platform if your smartphone or tablet doesn’t have inherent support for that platform.

The Bose solution primarily used its app to “map” a secondary function button on the headset to activate Google Assistant. The B&O approach, on the other hand, has the Beosound A1 and your smartphone or similar mobile-platform device work together as if they were an Amazon Echo.

Why do I see this as a significant trend for “smart-speaker” and allied device use cases? Google, Amazon and the Voice Interoperability Initiative want to extend their voice-driven assistant platforms to setups based around Bluetooth audio peripherals, and this underscores the reality that the highly-capable host devices will have Internet connectivity via a mobile-broadband connection or a local-area network.

One reason is to allow manufacturers to provide a highly-portable approach to using Alexa or Google Assistant while on the move. Similarly, this approach will appeal to the automotive and marine infotainment sector, with end-users bringing their own Internet connection with them in their car or boat but wanting to use their preferred voice-driven assistant platform there.

Some technology manufacturers may look at the idea of a two-piece setup where a specially-designed Bluetooth speaker links with a device that is normally connected to the Internet, like a set-top box or router, with both devices working together in a smart-speaker capacity. Here, it could be about a cost-effective smart-speaker platform, or about enabling battery-operated devices that use battery-efficient technologies.

What Bose and B&O are doing could bring the idea of a two-box smart-speaker setup to voice-driven assistant platforms, opening up some interesting pathways.

Ultrasound being used as a way to measure user proximity to gadgets

Article – From the horse’s mouth

Google Nest

How ultrasound sensing makes Nest displays more accessible (The Keyword blog post)

My Comments

Google is implementing in their Nest Hub smart-display products an automatic display-optimisation feature that is based on technology that has been used for a very long time.

Ultrasonic technology has been used in various ways by nature and humans to measure distance. In nature, bats and dolphins, which don’t have good vision, use this approach to “see” their way. It is used extensively in military and civilian marine applications to see what is underneath a boat or around a submarine, and is also a common medical-imaging technique.

As well, in the late 1970s, Polaroid implemented ultrasound as part of their active autofocus system, which ended up as a feature for their value-priced and premium instant-picture cameras. Here, this was used to measure the distance between the camera and the subject in order to adjust the lens for proper focus. There were limitations associated with the technology like not being able to work when you photograph through a window due to the ultrasonic waves not passing through the glass.

But Google has implemented this technology as a way to adjust the display on their Nest Hub smart displays for distant or close operation. The front of a Google Nest Hub has an ultrasonic sensor that works in a similar way to what was used in a Polaroid auto-focus instant-picture camera.

But rather than using the distance measurement from the ultrasonic sensor to adjust a camera’s lens as Polaroid did, this application adjusts the display according to the user’s distance from the Nest Hub. If you are distant from the Nest Hub, you see reduced information, with the key details in a larger typeface. If you come closer to the Nest Hub, you see more detail in a smaller typeface.
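The arithmetic behind this is simple time-of-flight ranging. In the sketch below, the 1.5-metre threshold and the layout names are made-up examples rather than Google’s actual values:

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # in dry air at roughly 20 degrees Celsius

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance to the nearest reflector, given the ultrasonic round-trip time."""
    return SPEED_OF_SOUND_M_PER_S * round_trip_seconds / 2.0

def choose_layout(distance_m: float, threshold_m: float = 1.5) -> str:
    # Far away: key details only, in large type. Close up: more detail, smaller type.
    return "glanceable-large-type" if distance_m > threshold_m else "detailed-small-type"

round_trip = 0.012  # a 12 ms round trip puts the user roughly 2 metres away
print(choose_layout(distance_from_echo(round_trip)))  # glanceable-large-type
```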

Nest Hub Directions display press picture courtesy of Google

The Google blog article described this as being suitable for older users and those of us who have limited vision. Being able to see key information in a large typeface at a distance can make the Nest Hub accessible to this user group. At the same time, others can’t see the deeper information unless they are very close to the device.

End-user privacy is still assured thanks to the use of a low-resolution distance-measurement technology whose results are kept within the device. As well, there is a menu option within the Google Home app’s Device Settings page to enable or disable the feature.

At the moment, it is initially being used for the timer and current-time display, as well as for showing travel time and traffic conditions for a planned journey you set up with Google Maps. But Google and other software developers working with the Google Home ecosystem will likely add distance-sensitive display functionality to more applications, such as appointments, alerts and reminders.

Some people could see this technology being used not just to optimise the readout on a smart display but also to ascertain whether people are actually near these devices. This could then drive functionality like energy-saving behaviour, where the display turns off if no-one is near it.

But what Google has to do is license out this technology so others can implement it in other fixed-display devices. Here, it could become of more use to many who don’t go for a Google Nest Hub.


Amazon starts Voice Interoperability Initiative for voice-driven assistant technology

Articles

Amazon Echo on kitchen bench press photo courtesy of Amazon USA

Devices like Amazon Echo could support multiple voice assistants

Amazon Creates A Huge Alliance To Demand Voice Assistant Compatibility | The Verge

Amazon launches Voice Interoperability Initiative — without Google, Apple or Samsung | ZDNet

Amazon enlists 30 companies to improve how voice assistants work together | Engadget

From the horse’s mouth

Amazon

Voice Interoperability Initiative (Product Page)

Amazon and Leading Technology Companies Announce the Voice Interoperability Initiative (Press Release)

My Comments

Amazon have instigated the Voice Interoperability Initiative which, at the moment, allows a hardware or software device to work with multiple compatible voice-driven AI assistants. It also includes the ability for someone to develop a voice-driven assistant platform that can serve a niche yet have it run on commonly-available smart-speaker hardware alongside a broad-based voice-driven assistant platform.

Freebox Delta press photo courtesy of Iliad (Free.fr)

Freebox Delta as an example of a European voice-driven home assistant that could support multiple voice assistant platforms

An example they called out was running the Salesforce Einstein voice-driven assistant, which works with Salesforce’s customer-relationship-management software, on the Amazon Echo smart speaker alongside the Alexa voice assistant. Similarly, a person who lives in France could take advantage of the highly-competitive telecommunications and Internet landscape there by buying the Freebox Delta smart speaker / router and having it offer both Free.fr’s own voice-assistant platform and Amazon Alexa on the same device.

Microsoft, BMW, Free.fr, Baidu, Bose, Harman and Sony are behind this initiative, while Google, Apple and Samsung are notably absent. This is most likely because Google, Apple and Samsung have their own broad-based voice-driven assistant platforms that are part of their hardware or operating-system platforms, with Apple placing more emphasis on vertically integrating some of its products. This is despite Samsung’s Android phones being set up to work with either its Bixby voice assistant or Google’s Assistant service.

Intel and Qualcomm are also behind this effort by offering silicon that provides the power to effectively understand the different wake words and direct a session’s focus towards a particular voice assistant.

The same hardware device or software gateway can recognise assistant-specific wake words and react to them on a session-specific basis. There will be the ability to assure customer privacy through measures like encrypted tunnelling for each assistant session, along with an effort to be power-efficient, which is important for battery-operated devices.
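As a sketch of that session-per-wake-word idea (the initiative defines requirements rather than code, so the wake words and handler names here are placeholders), a dispatcher could look something like this:

```python
from typing import Callable, Dict

# Placeholder handlers - on a real device each would hand the audio session
# to its own assistant back-end over an encrypted, per-assistant channel.
HANDLERS: Dict[str, Callable[[str], str]] = {
    "alexa": lambda utterance: f"[Alexa] {utterance}",
    "einstein": lambda utterance: f"[Salesforce Einstein] {utterance}",
    "ok freebox": lambda utterance: f"[Free.fr assistant] {utterance}",
}

def route(wake_word: str, utterance: str) -> str:
    handler = HANDLERS.get(wake_word.lower())
    if handler is None:
        return "No assistant registered for that wake word"
    return handler(utterance)

print(route("Einstein", "what are my open opportunities this quarter?"))
```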

Personally, I see this as an opportunity for companies to place emphasis on niche voice-assistant platforms, like what Salesforce is doing with their Einstein product or Microsoft with its refocused Cortana product. It can even make the concept of these voice assistants more relevant to the enterprise market and business customers.

Similarly, telcos and ISPs could create their own voice-driven assistants for use by their customers, typically with functionality that answers what they want out of the telco’s offerings. It can also extend to the hotel and allied sectors that want to use voice-driven assistants to provide access to functions of benefit to hotel guests like room service, facility booking and knowledge about the local area. Let’s not forget vehicle builders who implement voice-driven assistants as part of their infotainment technology so that the driver has both hands on the wheel and eyes on the road.

This kind of offering can open up a market for the creation of “white-label” voice-assistant platforms that can be “branded” by their customers. As well, some of these assistants can be developed with a focus towards a local market’s needs like high proficiency in a local language and support for local values.

For hardware, the Amazon Voice Interoperability Initiative can open up paths for innovative devices. This can lead towards ideas like automotive applications, smart TVs, built-in use cases like intercom / entryphone or thermostat setups, and software-only assistant gateways that work with computers or telephone systems, amongst other things.

With the Amazon Voice Interoperability Alliance, there will be increased room for innovation in the voice-driven assistant sector.

You can manage your Google Home’s lighting and volume for night-time use

Article

You can use the Google Home app to manage how your Google Home smart speaker works during the evening and night hours for a better night’s sleep

Google Home Has A Hidden Night Mode (And We Love It) | Lifehacker

From the horse’s mouth

Google

Manage volume and LED brightness with Night mode (Instruction sheet)

My Comments

Some of you may want to interact with a Google Home smart speaker during the wee hours, such as to ask for the time or weather. Or you may touch the device and use it as a night-light. But it can be so bright or loud at these times that other people are woken up at odd hours. Here, some users have to adjust the volume to avoid this risk of disturbance.

But Google has a “Night Mode” feature that allows you to determine the maximum volume and device lighting brightness during times when you don’t want to be disturbed.

Here, you have to use the Google Home mobile-platform app on your smartphone or tablet. As well, your mobile device has to be on the same logical network as the Google Home device, which would typically be the same Wi-Fi network when you are thinking of your home network.

When you open the Google Home app, tap on the gear-shape Settings icon and you will see the “Night Mode” setting in the list of settings. There is a toggle switch to enable or disable this mode, and when you enable this mode, the LEDs on the top of your Google Home device will dim while the maximum speaker volume will be softer.

There is an option to schedule the times and days of the week when the Night Mode feature will be active. This may be of importance if you want to make sure it comes into play on weeknights, for example.

There are settings to determine the maximum speaker volume and lighting brightness that will apply to your Google Home smart speaker while it is in the “Night Mode” condition.
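As an illustration of how such scheduled caps behave (a sketch only; the real settings live inside the Google Home app rather than in code you write), the logic amounts to something like this, with made-up times and levels:

```python
from datetime import datetime, time

# Made-up values mirroring the kind of caps you would set in the Google Home app
NIGHT_START, NIGHT_END = time(22, 0), time(7, 0)
NIGHT_MAX_VOLUME = 0.3      # fraction of full speaker volume
NIGHT_MAX_BRIGHTNESS = 0.1  # fraction of full LED brightness

def night_mode_active(now: datetime) -> bool:
    t = now.time()
    # The window crosses midnight, so a time counts if it is after the start
    # of the window or before its end.
    return t >= NIGHT_START or t < NIGHT_END

def capped(requested: float, cap: float, now: datetime) -> float:
    return min(requested, cap) if night_mode_active(now) else requested

print(capped(0.8, NIGHT_MAX_VOLUME, datetime(2020, 5, 1, 23, 30)))  # 0.3 at night
print(capped(0.8, NIGHT_MAX_VOLUME, datetime(2020, 5, 1, 14, 0)))   # 0.8 by day
```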

The Do Not Disturb option on the Night Mode settings page mutes any notification or system sounds that your Google Home smart speaker makes. But timer or alarm signals will “break through” this setting so you don’t miss that extra alarm you set to wake you up so you can pick up that loved one from the airport as they come off that late flight.

But I am not sure whether these settings can be configured for individual devices or all of the devices bound to the same account. This may be of importance if you want to reduce the volume and lighting brightness on units installed in the bedrooms while one or more units installed in the common living areas are maintained at bright levels; or you want to maintain a common setting across your home.

A feature that could improve on this would be to have the LEDs on top of a Google Home device stay alight, but at the Night Mode maximum brightness, to allow you to use the device as a night-light. This is more so for those of us who would keep one of these devices in the corridor near the main bathroom, or within that bathroom, to serve as a “beacon” night-light for using the bathroom at night.

At least Google has provided an option to allow the Google Home device family to work properly without disturbing other people’s sleep at night.