Tag: voice-driven personal assistant

The BBC to develop its own voice assistant for finding AV content

Article

Amazon Echo on kitchen bench press photo courtesy of Amazon USA

The BBC is working on a voice-driven assistant of its own

BBC planning Alexa rival | Advanced Television

BBC’s Beeb voice assistant goes beta | Advanced Television

BBC launches voice assistant that will learn regional accents | Mashable

From the horse’s mouth

British Broadcasting Corporation

What’s next for Beeb: The new voice assistant from the BBC (Press Release)

My Comments

The BBC have started work on the first voice-driven assistant platform that is designed in the UK and for British conditions. This platform is named Beeb which is one of the common affectionate nicknames for that public-service broadcaster.

It will be initially rolled out to UK participants of Microsoft’s Windows Insider program and is at beta stage, thus limiting it to use on regular computers at the moment. It is based on the BBC’s iPlayer content-directory effort and will be primarily about bringing up audio content like live and on-demand radio content, music or podcasts. It will also be able to bring forth local news and weather information.

UK Flag

… optimised for UK accents and likely to place the UK in the voice assistant sphere

There will even be the ability to respond with material written by BBC comedy writers when you ask for a joke, including the ability to ask for a joke from a particular show. This is thanks to the amount of intellectual property they have built up over the years, including all those legendary sitcoms and other comedy associated with British telly.

Beeb will be voiced with a male voice that has a northern English accent. As well, its initial setup will have the user determine what UK regional accent they have so it understands their accent better. This is part of an effort by the BBC to preserve local British accents in the face of other voice-driven assistants which force users to use standard British English pronunciation over regional accents.

There is an intent to roll it out to other devices which are software-programmable, and it will also be part of revisions to the iPlayer app and BBC Website. But personally, I see the BBC’s Beeb effort as a candidate for the Voice Interoperability Initiative driven by Amazon and Microsoft, which allows a device to run with multiple voice assistants and respond to each assistant’s “wake word”. That is an activity that the BBC are in fact a participant in.

Support for regional and local accents

I also see the Beeb voice assistant as opening a path for UK companies to develop their own voice-driven assistants that respect UK language, dialects, accents and culture. It can also serve as an example of fine-tuning a voice-driven assistant platform to work with the various local accents and dialects used within a country or language, avoiding steering the user towards what is seen as a standardised pronunciation for that language.

What will eventually need to happen is automatic detection of a user’s accent, so the assistant works with that accent as soon as they speak rather than requiring the user to state which accent they use during setup. Having to choose an accent during the setup phase can be a problem for households with members that come from different regions.
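To illustrate the household problem, here is a minimal Python sketch (entirely hypothetical: the profile store, speaker IDs and accent labels are made up, and a real system would use a speaker-identification model rather than a lookup) of the difference between one accent chosen at setup and per-speaker accent handling:

```python
# Hypothetical per-speaker accent profiles. A single setup-time setting
# would force every household member onto one of these labels; keying
# the accent model on the identified speaker avoids that.
household_profiles = {
    "alice": "scottish",
    "bob": "west-country",
}

def accent_for(speaker_id, default="received-pronunciation"):
    # Fall back to a default accent model for unknown speakers (guests)
    # instead of applying whichever accent was chosen at setup.
    return household_profiles.get(speaker_id, default)

print(accent_for("alice"))   # scottish
print(accent_for("guest"))   # received-pronunciation
```

The automatic-detection step the paragraph calls for would replace the `speaker_id` lookup with classification of the incoming audio itself.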

I would also see efforts like Beeb end up being about local-speech “modules” for voice-driven assistant platforms that enable these platforms to support a country’s or language’s local peculiarities when it comes to regular use. It will then avoid the need for people to resort to using “standard” diction rather than the accent they are comfortable with to deal with voice assistants. Similarly, it could be about different “voicings” that maintain local characteristics for the assistant’s speech.

Who knows what this could mean for making voice-driven assistants locally relevant?

The two-box voice-driven home assistant setup is being made real with Bluetooth

Article

Bang & Olufsen Beosound A1 Bluetooth smart speaker press image courtesy of Bang & Olufsen

Bang & Olufsen Beosound A1 2nd Generation Bluetooth smart speaker that works with a smartphone or similar device to benefit from Amazon Alexa

B&O Beosound A1 (2nd Gen) Announced With Alexa Integration | Ubergizmo

My Comments

At the moment, the latest generation of the Bose QuietComfort 35 noise-cancelling Bluetooth headset implements a software link with the Google Assistant voice-driven personal assistant through its own app. Now Bang & Olufsen have come up with the Beosound A1 Second Generation battery-operated Bluetooth speaker that integrates with the Amazon Alexa voice-driven home assistant platform.

But what are these about?

Bluetooth smart speaker diagram

How the likes of the B&O Beosound A1 work with your smartphone, tablet or computer to be a smart speaker

These are purely Bluetooth audio peripherals that connect to your smartphone which links with the Internet via Wi-Fi or mobile broadband. This is usually facilitated with a manufacturer-supplied app for that device that you install on your smartphone or tablet. You will also have to install the client software for the voice-driven assistant platform if your smartphone or tablet doesn’t have inherent support for that platform.

The Bose solution primarily used their app to “map” a secondary function button on the headset to activate Google Assistant. The B&O approach has the Beosound A1 and your smartphone or similar mobile-platform device work together as if they were an Amazon Echo.

Why do I see this as a significant trend for “smart-speaker” and allied device use cases? Google, Amazon and the Voice Interoperability Initiative want to extend their voice-driven assistant platforms to setups based around Bluetooth audio peripherals, and this underscores the reality that the highly-capable host device provides the Internet connectivity via a mobile-broadband connection or a local-area network.

One reason is to allow manufacturers to provide a highly-portable approach towards using Alexa or Google Assistant while on the move. Similarly, this approach will appeal to the automotive and marine infotainment sector, with end-users bringing their own Internet connection with them in their car or boat but wanting to use their preferred voice-driven assistant platform there.

Some technology manufacturers may look at the idea of a two-piece setup with a specially-designed Bluetooth speaker that links with a device that is normally connected to the Internet, like a set-top box or router, with both devices working in a smart-speaker capacity. Here, it can be about a cost-effective smart-speaker platform or about enabling battery-operated devices that use battery-efficient technologies.

What Bose and B&O are doing could bring the idea of a two-box smart-speaker setup to voice-driven assistant platforms, opening up some interesting pathways.

Amazon starts Voice Interoperability Initiative for voice-driven assistant technology

Articles

Amazon Echo on kitchen bench press photo courtesy of Amazon USA

Devices like Amazon Echo could support multiple voice assistants

Amazon Creates A Huge Alliance To Demand Voice Assistant Compatibility | The Verge

Amazon launches Voice Interoperability Initiative — without Google, Apple or Samsung | ZDNet

Amazon enlists 30 companies to improve how voice assistants work together | Engadget

From the horse’s mouth

Amazon

Voice Interoperability Initiative (Product Page)

Amazon and Leading Technology Companies Announce the Voice Interoperability Initiative (Press Release)

My Comments

Amazon have instigated the Voice Interoperability Initiative which, at the moment, allows a hardware or software device to work with multiple compatible voice-driven AI assistants. It also includes the ability for someone to develop a voice-driven assistant platform that can serve a niche yet have it run on commonly-available smart-speaker hardware alongside a broad-based voice-driven assistant platform.

Freebox Delta press photo courtesy of Iliad (Free.fr)

Freebox Delta as an example of a European voice-driven home assistant that could support multiple voice assistant platforms

An example they called out was running the Salesforce Einstein voice-driven assistant, which works with Salesforce’s customer-relationship-management software, on the Amazon Echo smart speaker alongside the Alexa voice assistant. Similarly, a person living in France and taking advantage of the highly-competitive telecommunications and Internet landscape there could buy the Freebox Delta smart speaker / router and have it use Free.fr’s voice assistant platform or Amazon Alexa on that same device.

Microsoft, BMW, Free.fr, Baidu, Bose, Harman and Sony are behind this initiative while Google, Apple and Samsung are definitely absent. This is most likely because Google, Apple and Samsung have their own broad-based voice-driven assistant platforms that are part of their hardware or operating-system platforms, with Apple placing more emphasis on vertically integrating some of their products. This is although Samsung’s Android phones are set up to work with either their Bixby voice assistant or Google’s Assistant service.

Intel and Qualcomm are also behind this effort by offering silicon that provides the power to effectively understand the different wake words and direct a session’s focus towards a particular voice assistant.

The same hardware device or software gateway can recognise assistant-specific wake words and react to them on a session-specific basis. There will be the ability to assure customer privacy through measures like encrypted tunnelling for each assistant session along with an effort to be power-efficient which is important for battery-operated devices.
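The session-focused wake-word behaviour described above can be sketched in a few lines of Python. This is purely illustrative and not based on any real SDK; the `Session` class, the wake-word table and the assistant names are all assumptions:

```python
# Illustrative sketch: one device hosting several voice assistants,
# routing each utterance to whichever assistant's wake word was heard
# and keeping the conversation's focus with that assistant afterwards.

class Session:
    """Tracks which co-resident assistant currently has focus."""

    def __init__(self):
        self.active_assistant = None

    def handle_utterance(self, text, wake_words):
        # If the utterance starts with a registered wake word, hand focus
        # (and the remainder of the utterance) to that assistant.
        for wake_word, assistant in wake_words.items():
            if text.lower().startswith(wake_word):
                self.active_assistant = assistant
                rest = text[len(wake_word):].lstrip(",. ")
                return assistant, rest
        # No wake word heard: stay with the assistant already in focus.
        return self.active_assistant, text

wake_words = {"alexa": "Alexa", "einstein": "Einstein"}
session = Session()
print(session.handle_utterance("Alexa, play some jazz", wake_words))
print(session.handle_utterance("turn it up", wake_words))  # still Alexa
```

A real implementation would also need the per-session encrypted tunnelling and low-power wake-word detection the initiative describes; this sketch only shows the focus-routing idea.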

Personally, I see this as an opportunity for companies to place emphasis on niche voice-assistant platforms like what Salesforce is doing with their Einstein product or Microsoft with its refocused Cortana product. It can even make the concept of these voice assistants more relevant to the enterprise market and business customers.

Similarly, telcos and ISPs could create their own voice-driven assistants for use by their customers, typically with functionality that answers what they want out of the telco’s offerings. It can also extend to the hotel and allied sectors that want to use voice-driven assistants for providing access to functions of benefit to hotel guests like room service, facility booking and knowledge about the local area. Let’s not forget vehicle builders who implement voice-driven assistants as part of their infotainment technology so that the driver keeps both hands on the wheel and eyes on the road.

This kind of offering can open up a market for the creation of “white-label” voice-assistant platforms that can be “branded” by their customers. As well, some of these assistants can be developed with a focus towards a local market’s needs like high proficiency in a local language and support for local values.

For hardware, the Amazon Voice Interoperability Initiative can open up paths for innovative devices. This can lead towards ideas like automotive applications, smart TVs, built-in use cases like intercom / entryphone or thermostat setups, and software-only assistant gateways that work with computers or telephone systems, amongst other things.

With the Voice Interoperability Initiative, there will be increased room for innovation in the voice-driven assistant sector.

Celebrity voices to become a new option for voice assistants

Article

How to Make John Legend Your Google Assistant Voice | Tom’s Guide

Google Assistant launches first celebrity cameo with John Legend | CNet

How to make John Legend sing to you as your new Google Assistant voice | CNet

From the horse’s mouth

Google

Hey Google, talk like a Legend (Blog Post)

Video – Click or tap to play

My Comments

Google is trying out a product-differentiating idea of using celebrity voices as an optional voice that answers you when you use their Google Assistant.

This practice of using celebrity voices as part of consumer electronics and communications devices dates back to the era of telephone answering machines. Here, people could buy “phone funnies” or “ape tapes” which featured one-liners or funny messages typically recorded by famous voices such as some of radio’s and TV’s household names. It was replaced through the 90s with downloadable quotes that you could use for your computer’s audio prompts or, eventually, for your mobile phone’s ringtone.

Now Google has worked on the idea of creating what I would call a “voice font”, which uses a particular voice to enunciate text provided in a text-to-speech context. This is equivalent to the use of a typeface to determine how printed text looks. It also encompasses the use of pre-recorded responses for certain questions, typically underscoring the particular voice’s character.

The technology Google is using is called WaveNet which implements the neural-network and machine-learning concept to synthesise the various voices in a highly-accurate way. But to acquire the framework that describes a particular voice, the actor would have to record predefined scripts which bring out the nuances in their voices. It is part of an effort to provide a natural-sounding voice-driven user experience for applications where the speech output is varied programmatically such as voice-driven assistants or interactive voice response.
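The “voice font” analogy could be sketched in Python like this. Everything here is invented for illustration: `VoiceFont`, its fields and the example parameters bear no relation to Google’s actual WaveNet interface.

```python
# Sketch of the "voice font" idea: the same text rendered through
# different voice descriptions, the way one string can be set in
# different typefaces. All names and fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class VoiceFont:
    name: str
    pitch: float                 # relative pitch shift (illustrative)
    rate: float                  # speaking-rate multiplier (illustrative)
    canned_lines: dict = field(default_factory=dict)  # pre-recorded responses

def render(text, font):
    # A real system would feed these parameters to a neural TTS model;
    # here we simply describe what would be synthesised or replayed.
    if text in font.canned_lines:
        return f"[pre-recorded {font.name}] {font.canned_lines[text]}"
    return f"[synthesised {font.name}, pitch={font.pitch}, rate={font.rate}] {text}"

legend = VoiceFont("celebrity-voice", pitch=-2.0, rate=0.95,
                   canned_lines={"sing me a song": "La la la..."})
print(render("What's the weather?", legend))
print(render("sing me a song", legend))
```

The point of the sketch is the split the article describes: free-form answers go through the parameterised synthesiser, while signature questions fall back to pre-recorded lines that carry the voice’s character.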

At the moment, this approach can only happen with actors who are alive and can come in to a studio. But I would see WaveNet and similar technologies eventually set up to work from extant recordings where the actor isn’t working to a special script used for capturing their voice’s attributes, including where the talent’s voice competes with other sounds like background music or sound effects. By working from these recordings, it could be about using the voices of evergreen talent who have passed on, or using the voices that the talent used while performing in the particular roles that underscored their fame. A good example of this application is the actors who performed in those classic British TV sitcoms of the 1970s, or using Peter Sellers’, Spike Milligan’s, Harry Secombe’s and Michael Bentine’s voices as they sounded in the Goon Show radio comedy.

Google is presenting this in the form of a special-issue “voice font” representing John Legend, the singer-songwriter and actor who has sung alongside the likes of Alicia Keys and Janet Jackson. Here, it is being used as a voice that one can implement on their Google Home, Android phone or other Google-Assistant device, responding to particular questions you ask of that assistant.

Amazon and others won’t take this lying down, especially as the voice-driven assistant market is very competitive. As well, there will be market pressure for third parties to implement this kind of technology in their voice-driven applications, such as navigation systems, in order to improve and customise the user experience.

BMW to use the car as a base for a European voice-driven assistant platform

Article

BMW Intelligent Personal Assistant may be the cold, distant German Siri of our dreams | CNet

From the horse’s mouth

BMW Group

Video – Click or tap to play

My Comments

I have been pushing for the idea of European firms answering what Silicon Valley offers but applying European values to these offerings. Here, it’s like the rise of Airbus and Arianespace from France answering the USA’s leadership in the aerospace industry.

I was calling this out because the European Commission has always been worried about what the popular Silicon-Valley-based online services, especially Google, Amazon, Facebook and Apple, were doing to European personal and business values like democracy, competitive markets, user privacy and transparency. Their typical answer was to either pass more regulations or litigate against them in the European court system. But they could easily encourage European companies to offer online services that underscore the European mindset through, for example, business-development assistance.

It is something that is slowly happening with Spotify, the leading world-wide jukebox, rising from Sweden. There is also a persistent effort within France to answer YouTube with a peer-to-peer video-streaming service.

Now BMW have stepped up to the plate by working on a voice-driven assistant which will initially be focused towards the automotive space. But they intend to take it beyond the vehicle and have it as a European competitor to Alexa, Siri, Google Assistant or Cortana.

But I would say that even if they don’t get it beyond the car dashboard, they could establish it as a white-label platform for other European tech firms to build upon. This could lead to the creation of smart-speaker products from the likes of Bang & Olufsen or TechniSat that don’t necessarily have to run a Silicon-Valley voice-driven assistant platform. Or Bosch or Electrolux could work on a “smart-home” control setup with a voice-driven assistant that is developed in Europe.

Google Assistant has the ability to break the bad news cycle

Article

Google’s Assistant is here to give you a break from the horrible news cycle | FastCompany

From the horse’s mouth

Google

Hey Google, tell me something good (Blog Post)

Video – Click or tap to play

My Comments

The news cycle that you hear in the USA has been primarily focused on bad news, especially what President Trump is up to or some natural disaster somewhere around the world. A very similar issue is happening around the world. A common issue drawn out regarding this oversaturation of bad news is that it can bring about fear, uncertainty and doubt regarding our lives, with some entities taking advantage of it to effectively manipulate us.

Some of us follow particular blogs or Facebook pages that contain curated examples of good news that can break this monotony and show solutions for the highlighted problems. But Google is extending this to a function they are building into the Google Assistant platform, with stories that are curated by people rather than machines and, in a lot of cases, derived from a variety of media sources. This is facilitated by the Solutions Journalism Network non-profit, which focuses on solution-oriented media coverage.

Of course, there will be the doubters and skeptics who will think that we aren’t facing reality and are dwelling in the “Hippie Days” and the “Jesus People” era of the 1960s and early 1970s. But being able to come across positive solutions for the various problems being put forward, including people working “outside the box” to solve them, can inspire us.

This feature is being offered on an experimental basis in the USA only and can be used on your Google Home or other Google-Assistant devices. But as this application is assessed further, it could easily be made available across more countries.

You can find out what Cortana has recorded

Article

Harman Invoke Cortana-driven smart speaker press picture courtesy of Harman International

You can also manage your interactions with the Harman-Kardon Invoke speaker here

How to delete your voice data collected by Microsoft when using Cortana on Windows 10 | Windows Central

My Comments

Previously, I posted an article about managing what Amazon Alexa has recorded when you use an Amazon Echo or similar Alexa-compatible device.

Now Microsoft has a similar option for Cortana when you use it with Windows 10. This is also important if you use the Harman-Kardon Invoke smart speaker or the Johnson Controls GLAS smart thermostat, as long as they are bound to your Microsoft account.

Windows 10 Settings - Accounts - Manage My Microsoft Account

Manage your Microsoft Account (and Cortana) from Windows 10 Settings

In most instances such as your computer, Cortana may be activated by clicking on an icon on the Taskbar or pressing a button on a suitably-equipped laptop, keyboard or other peripheral to have her ready to listen. But you may set her up to hear the “Hey Cortana” wake word. This may be something that a Cortana-based smart device requires of you for expected functionality when you set it up.

This is where Cortana may cause problems by picking up unwanted interactions. But you can edit what Cortana has recorded through your interactions with her.

Here, you go in to Settings, then click on Accounts to open the Accounts screen. Click on Your Info, which will show some basic information about the Microsoft Account associated with your computer.

Privacy dashboard on your Microsoft Account management Website

Privacy dashboard on your Microsoft Account management Website

Click on “Manage My Microsoft Account” which will open a Web session in your default browser to manage your Microsoft Account. Or you could go directly to https://account.microsoft.com without needing to go via the Settings menu on your computer. The direct-access method can be important if you have to use another computer like a Mac or Linux box or don’t want to go via the Settings option on your Windows 10 computer.

Microsoft Account Privacy Dashboard - Cortana Interactions highlighted

Click here for your Cortana Voice interaction history

You will be prompted to sign in to your Microsoft Account using your Microsoft Account credentials. Click on the “Privacy” option to manage your privacy settings. Then click on the “Activity History” option and select “Voice” to view your voice interactions with Cortana. Here, you can replay each voice interaction to assess whether they should be deleted. You can delete each interaction one by one by clicking the “Delete” option for that interaction or clear them all by clicking the “Clear activity” option.

Details of your voice interactions with Cortana

Details of your voice interactions with Cortana

Your management of what Cortana has recorded takes place at the Microsoft servers, in the same vein as what happens with Alexa. But deleting recordings has the disadvantage that Cortana no longer has access to the false starts she could use with her machine learning to understand your voice better.

These instructions would be useful if you are dealing with a Cortana-powered device that doesn’t use a “push-to-talk” or “microphone-mute” button where you can control when she listens to you.

You can find out what Alexa has recorded

Article

Amazon Echo on kitchen bench press photo courtesy of Amazon USA

You can find out what Amazon Alexa has recorded through your Echo device

How To Find Out What Your Alexa Is Recording | Lifehacker

My Comments

Recently, the computer press went into overdrive about an Amazon Echo setup that unintentionally recorded a family’s private conversation and forwarded it to someone in Seattle. Here, the big question being asked was what your Amazon Echo or similar smart-speaker device was recording without you knowing.

Amazon Echo, Google Home and similar voice-driven home-assistant platforms require a smart speaker that is part of the platform to listen for a “wake word”, a keyword that wakes these devices up and has them listening. Then these devices capture and interpret what you say after that “wake word” in order to perform their function. One of the functions these devices may perform is audio messaging, where they record a user’s message and pass it on to another user on the same platform.

I had previously covered the issue of these voice-driven assistants being at risk of nuisance triggering, including the XBox game console supporting a voice assistant that triggered when an adman on a TV commercial called out a spot-special for the games console by saying “XBox On Sale” or “XBox On Special”.

Here, I recommended the use of a manual “call button” to make these devices ready to listen when you are ready or a “microphone mute” toggle to prevent your device being falsely triggered. As well, I recommended a visual indicator on the device that signals when it is listening. This is a practice mainly done with voice-assistant functionality that is part of a video peripheral’s feature set or software that runs on a platform computing device. Google’s Home smart speaker instead uses the microphone-mute button to allow you to control its microphone.

But you can check what Alexa has been recording from your Amazon Echo or other Alexa-compatible speaker device and delete private material that she shouldn’t have captured. This is also useful if you are troubleshooting one of these devices, identifying misunderstood instructions or are developing an Alexa Skill for the Alexa ecosystem.

  1. Here you launch the Amazon Alexa mobile-platform app on your smartphone. If you are using the Amazon Alexa Website (http://alexa.amazon.com) as previously mentioned on this site, there is a similar procedure to go about identifying your Amazon Echo sessions.
  2. Then you tap on the hamburger-shaped “advanced operation” icon on the top left of your screen.
  3. Tap on Settings to bring up a Settings menu for your setup. Go to the History option in the Alexa Account section of that menu.
  4. Here, you will see a list of interactions with any Alexa-ecosystem hardware or software front-end related to your Amazon account. These will be categorised by what has been understood and what hasn’t been understood. There is an option to filter the interaction list by date, which is handy if you have made heavy use of your Amazon Echo device through the months and years.

You can play each interaction to be sure of what your Alexa device or software has recorded. The current version of the interface only allows you to delete each unwanted interaction one by one. The effect of the deletion is that the interaction, including the voice recording, disappears from your account and the Amazon servers. But this could degrade your Amazon Alexa experience due to it having less data to work on for its machine-learning abilities.

Here, at least with the Amazon Alexa ecosystem, you have some control over what has been recorded so you can remove potentially-private conversations from that ecosystem.

Voice-driven assistants at risk of nuisance triggering

Article

Amazon Echo on kitchen bench press photo courtesy of Amazon USA

A problem that showed up with the Amazon Echo’s always-listen behaviour was nuisance triggering for the laughter command

Voice control is no laughing matter | Videonet

My Comments

An issue that has been raised recently is the risk of a voice-driven assistant like Apple’s Siri, Amazon’s Alexa, Google’s Assistant or Microsoft’s Cortana being triggered inadvertently and becoming of nuisance value.

This was discovered with Amazon’s Echo devices where you could say “Alexa, laugh” and Alexa would laugh in response. But if this was said in conversation or through audio or video content you had playing in the background, it could come across as very creepy. A similar situation was discovered in 2014 with Microsoft’s XBox, which had voice-search functionality built in to it that you would wake by saying “XBox on!”. This was aggravated if, for example, a TV commercial from a consumer electronics outlet was playing and the adman announced a special deal on one of these consoles by saying something like “XBox On Special” or “XBox On Sale”, which contain this key phrase.

Similarly, we are starting to see “voice-driven search” become a part of consumer electronics, and this could become an annoyance whenever dialogue in a movie or TV show, or an adman’s talking in a TV commercial, instigates a search routine during your TV viewing.

But there are some implementations of these voice assistants that don’t start automatically when they hear the “wake phrase” associated with them like “Alexa” or “Hey Siri”. In these cases, you would press a “call” button to make the device ready to listen to you. This typically happens with smartphones, tablets, computers or smart-TV remote controls.

On the other hand, some smart speakers like Google Home use a microphone-mute button which you would activate if there is a risk of nuisance triggering. In this mode, the device’s microphone isn’t active until you manually disable the mute.

Google Home uses a microphone-mute button to control the mic

Personally, I would still like to see some form of manual control offered as the norm for these devices, preferably in the form of a “call” button with a distinct tactile feel when pressed. Then you would see a different light glow or other visual cue when the device is ready to listen to you. Here, the user has some form of control over when the device can listen to them, thus assuring their privacy.
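The interplay between a momentary “call” button, a microphone-mute toggle and wake-word listening could be modelled as a small state machine. This sketch is hypothetical and not taken from any actual device firmware; the class and method names are my own:

```python
# Minimal sketch of the two manual controls discussed above: a momentary
# "call" button that arms listening for one interaction, and a
# microphone-mute toggle that overrides everything else.

class MicControl:
    def __init__(self):
        self.muted = False
        self.armed = False   # set by the call button, cleared after one use

    def press_call_button(self):
        self.armed = True

    def toggle_mute(self):
        self.muted = not self.muted

    def may_listen(self, heard_wake_word=False):
        if self.muted:
            return False          # the mute switch wins over everything
        if self.armed:
            self.armed = False    # one interaction per button press
            return True
        return heard_wake_word    # otherwise, only listen on the wake word

mic = MicControl()
mic.press_call_button()
print(mic.may_listen())   # True: the call button armed one interaction
print(mic.may_listen())   # False: the arming was consumed
```

The privacy point of the article maps onto the first two branches: with the mute engaged or the button un-pressed, nothing said nearby (other than a wake word, if enabled) can start a recording.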

Here, the article underscored the role of speech as part of a user interface that integrates one of many different interaction types like touch or vision. This provides different comfort zones, letting users rely on whichever interaction type is comfortable to them.

Politics creeps in to the world of the voice-driven assistant

Articles

Amazon Echo on kitchen bench press photo courtesy of Amazon USA

Are Amazon Alexa and similar voice-driven assistants becoming a new point of political influence in our lives?

Amazon’s Alexa under fire for voicing gender and racial views | The Times via The Australian

Alexa, are you a liberal? Users accuse Amazon’s smart assistant of having a political bias after she reveals she is a feminist who supports Black Lives Matter | Daily Mail

Amazon’s Alexa is a feminist and supports Black Lives Matter | Salon

My Comments

An issue that has started to come on board lately is how Amazon Alexa, Google Assistant, Apple Siri and Microsoft Cortana respond to highly-polarising political questions, especially in the context of hot-button topics.

This talking point has come up just lately in the USA, which has over the last year become highly polarised. It has been driven by the rise of the alt-right, who have been using social media to spread their vitriol, the fake-news scandals, along with Donald Trump’s rise to the White House. Even people from other countries who meet Americans or deal with any organisation that has strong American bloodlines may experience this.

Could this even apply to Apple’s Siri assistant or Google Assistant that you have in your smartphone?

What had been discovered was that Amazon’s Alexa voice-driven assistant was being programmed to give progressive-tinted answers on issues seen as controversial in the USA like feminism, Black Lives Matter, LGBTI rights, etc. This was causing various levels of angst amongst the alt-right, who were worried about the Silicon-Valley / West-Coast influence on social media and tech-based information resources.

But this has not played out with the UK’s hot-button topics, with Alexa taking a neutral stance on questions regarding Brexit, Jeremy Corbyn, Theresa May and similar issues. She was even challenged about what a “Corbynista” (someone who defends Jeremy Corbyn and his policies) is. This is due to not enough talent being available in the UK or Europe to program Alexa to answer UK hot topics in a manner that pleases Silicon Valley.

The key issue here is that voice-driven assistants can be and are being programmed to answer politically-testing questions in a hyper-polarised manner. How can this be done?

Could it also apply to Cortana on your Windows 10 computer?

The baseline approach, taken by Apple, Google and Microsoft, can be to give the assistant access to resources that match the software company’s or industry’s politics. This can mean pointing to a full-tier or meta-tier search engine that favourably ranks resources aligned to the desired beliefs. It can also be about pointing to non-search-engine resources like media sites that run news with the preferred slant.

The advanced approach would be for a company with enough programming staff and knowledge on board to programmatically control the assistant to give particular responses to particular questions. This could be to create responses worded in a way that effectively “preaches” the desired agenda to the user. This method is in fact how Amazon is training Alexa to respond to those topics that are seen as hot-button issues in the USA.

Government regulators in various jurisdictions may start to raise questions regarding how Alexa and co are programmed and their influence on society. This is with a view to seeing search engines, social media, voice-driven assistants and the like as media companies similar to newspaper publishers or radio / TV broadcasters and other traditional media outlets, with a similar kind of regulatory oversight. It is more so where a voice-driven assistant is baked in to hardware like a smart speaker or software like an operating system to work as the only option available to users for this purpose, or one or more of these voice-driven assistants benefits from market dominance.

At the moment, there is nothing you can really do about this issue except to be aware of it and see it as something that can happen when a company or a cartel of companies who have clout in the consumer IT industry are given the power to influence society.