Category: Current and Future Trends

How do I see the state of play with network-based multiroom audio?

Definitive Technology W-Series multiroom soundbar – an example of one of these network multiroom speakers

Increasingly, nearly everyone in the consumer audio-visual industry is releasing multiroom audio platforms that work across a small network to share audio content through your house.

Typically, vendors use this as a way to “bind” most of their network-capable audio-video products together, having them serve as endpoints for music around the house. For some manufacturers, this functionality is also seen as a way to differentiate their consumer-electronics product ranges.

Key functions offered by most network-based multiroom audio platforms

Each unit in a network-based multiroom audio platform can be one of several AV device classes. These can be: a speaker system that plays out the audio content; an adaptor device that plays the audio content through another sound system that has its own amplification and speakers; or a network-capable amplifier that connects to a set of speakers.

The adaptor devices are often promoted as a way to bring an existing hi-fi into a multiroom audio setup, but you could use computer speakers or a 1980s-era boombox for the same effect. Similarly, network-capable amplifiers may be seen as a way to bring existing speakers into a multiroom audio setup.

There are different variations on the theme: soundbars that are connected to a TV; receivers and stereo systems that are capable of acting as sound systems in their own right but can also be part of these multiroom setups; or subwoofers that connect to the home network simply to add some “kick” to the sound played by other speakers in the setup.

These work on the premise of the speakers existing on the same logical network in a “home / small-business” network setup. That is where:

  • the network is connected to one router that typically gives it access to Internet service;
  • Wi-Fi wireless segments are set up according to the WPA-Personal (shared passphrase) arrangement;
  • members of the network are not isolated and can easily discover each other;
  • and you are not using a Web-based login page to use the network.
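To illustrate the first and third points, here is a minimal Python sketch of why two devices need to sit in the same logical IP subnet before they can discover each other. The function name and the assumption of a /24 subnet are mine for illustration, not part of any platform's API:

```python
import ipaddress

def on_same_logical_network(addr_a: str, addr_b: str, prefix: int = 24) -> bool:
    """Return True when both devices sit in the same IP subnet, which is
    the usual precondition for multiroom speakers to see each other."""
    net_a = ipaddress.ip_interface(f"{addr_a}/{prefix}").network
    net_b = ipaddress.ip_interface(f"{addr_b}/{prefix}").network
    return net_a == net_b

# A speaker at 192.168.1.20 and a phone at 192.168.1.35 can discover
# each other, but a device on a separate 192.168.2.x subnet cannot.
print(on_same_logical_network("192.168.1.20", "192.168.1.35"))  # True
print(on_same_logical_network("192.168.1.20", "192.168.2.35"))  # False
```

A guest Wi-Fi segment or client isolation breaks exactly this condition, which is why these platforms assume a simple flat home network.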

This Def Tech device is an “on-ramp” digital media adaptor for a network-based multiroom audio setup

The speakers can be set up as members of a logical group that typically represents a room, with the ability to have multiple logical speaker groups on the same logical network. Under normal operation, all speakers of that group play the same audio stream synchronously. As well, the hardware and software work together to avoid jitter and other problems associated with moving synchronous time-dependent audio content across packet-based networks.
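One common way to achieve this synchronisation is for a controller to stamp each audio chunk with a target play-out time against a shared clock, with each speaker buffering chunks and only playing them once its synchronised clock reaches the stamp. This Python sketch is purely illustrative; the names and the simplified approach are assumptions, not any vendor's actual protocol:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    seq: int        # position in the stream
    play_at: float  # target play-out instant on the shared clock, in seconds

def schedule_chunks(start_time: float, chunk_ms: int, count: int) -> list:
    """Controller side: stamp each chunk with the moment every speaker
    should start playing it, spaced by the chunk duration."""
    return [Chunk(i, start_time + i * chunk_ms / 1000.0) for i in range(count)]

def ready_to_play(chunk: Chunk, local_clock: float) -> bool:
    """Speaker side: hold a buffered chunk until the synchronised local
    clock reaches its stamp, so all speakers start it together."""
    return local_clock >= chunk.play_at

# Three 20 ms chunks scheduled to begin at t = 100 s on the shared clock
chunks = schedule_chunks(start_time=100.0, chunk_ms=20, count=3)
```

Because every speaker waits for the same timestamp rather than playing on arrival, variations in network delivery time are absorbed by the buffer instead of being heard as echo between rooms.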

Some platforms allow the creation of a multichannel group where a speaker or speakers play a channel of a stereo or multichannel soundmix. Here, you could have one speaker play the left channel of a stereo soundmix while another speaker plays the right channel of that stereo mix. This has led to the creation of surround-sound setups with a soundbar or surround-capable stereo receiver playing the front channels of a surround soundmix while wireless speakers look after the surround channels and low-frequency effects of that mix.
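Conceptually, each speaker in such a group keeps only its assigned channel from the interleaved mix. This hypothetical Python sketch (not any platform's real API) shows the idea with a tiny stereo stream:

```python
def channel_for(frames, role):
    """Keep only this speaker's channel from an interleaved stereo mix.
    frames: list of (left, right) sample pairs; role: "left" or "right"."""
    index = {"left": 0, "right": 1}[role]
    return [frame[index] for frame in frames]

# A tiny interleaved stereo mix: each tuple is one (left, right) sample pair
mix = [(10, -10), (20, -20), (30, -30)]
left_speaker = channel_for(mix, "left")    # plays [10, 20, 30]
right_speaker = channel_for(mix, "right")  # plays [-10, -20, -30]
```

A surround group would extend the same idea with more channel roles (front, surround, LFE) mapped to more speakers.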

Let’s not forget that some systems have the ability to use certain speakers to handle particular frequency ranges of the audio stream. The obvious case is to bring in a wireless subwoofer to provide that bit of extra bass punch to the music. But it could also be about using full-range speaker systems with improved bass response to complement speakers that don’t have that kind of bass response. In this case, the full-range speaker may allow frequency-level adjustability, and you could set things up so that it puts more of its power behind the bass while the other speakers provide stronger localised treble response.

Yamaha R-N402 Natural Sound Network Stereo Receiver press picture courtesy of Yamaha Australia

Yamaha R-N402 Network Stereo Receiver – a MusicCast-based example of a stereo component that can stream its own sources to a network multiroom system or play content from an online or multiroom source

You can adjust the sound levels for each output device individually or adjust them all as a group. The individual approach can appeal to “party-mode” arrangements where different speakers are in different rooms, and is of benefit where you can adjust the sound level on the device itself; but the group approach comes in handy with multiple speakers in one room, such as a multichannel setup.
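The group approach can be thought of as shifting every speaker's level by the same amount, so the balance between them is preserved. A small illustrative Python sketch of that idea (the names are my assumptions):

```python
def set_group_volume(volumes, delta):
    """Shift every speaker in the group by the same amount, clamped to
    the 0-100 range, so the balance between speakers is preserved."""
    return {name: max(0, min(100, level + delta))
            for name, level in volumes.items()}

room = {"soundbar": 40, "surround-left": 30, "surround-right": 30}
louder = set_group_volume(room, +10)  # every speaker goes up by 10
```

An individual adjustment, by contrast, would simply set one entry in the dictionary, which is what you want when each speaker serves a different room at a different listening level.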

All of these setups use a mobile-platform app supplied by the platform vendor as the control surface. But some of them allow some form of elementary control like programme selection or sound-level adjustment through controls on the device or its remote control. Let’s not forget that an increasing number of these platforms are being supported by interfaces for one or more voice-driven home assistants so you can tell Amazon Alexa to adjust the volume or play a particular source through the system.

Most of these platforms allow a device to have integrated programme sources or input connections for external equipment, and to stream what’s playing through these sources or inputs to one or more other speakers. The applications put forward include playing the TV sound from a connected TV in the living room through a speaker in the kitchen, or having the music from a CD playing on the stereo system’s CD player come through a speaker in the bedroom.

A party context for this feature could include connecting an audio adaptor with a line-level input to the DJ’s mixer output, in parallel with the PA amplifier and speakers serving the dance-floor area. Then you “extend” the party sound that the DJ creates into the other rooms using other wireless speakers or audio adaptors based on that same platform, with each output device working at a level appropriate to the area it serves. Here, the multiroom audio setup can make it easy to provide “right-sized” amplification for other areas at the venue.

Denon HEOS wireless speakers

The Denon HEOS multiroom speakers – a typical example of network-based multiroom devices

Increasingly, most of these platforms are being geared towards taking advantage of your home network to reproduce master-grade audio content at the different speakers. Initially this was to cater for file-based audio content sourced from online “download-to-own” music storefronts that cater to audiophiles, but it now extends to high-quality streaming-music services. It also is a way to stream audio content from analogue sources, such as your vinyl record collection, across your home network without losing sound quality in the process.

The current limitations with these systems

The multiroom-audio platforms are created by the audio-equipment manufacturers or, in some cases, the companies behind the hardware chipsets used in these devices. Only one platform, namely DTS Play-Fi, is created by a company that isn’t developing particular chipsets or equipment.

This effectively leads to balkanisation of the network-based multiroom audio marketplace, where you have to be sure all your equipment is part of one platform for it to work correctly. You may be able to work around this problem by connecting one unit from one platform to a unit belonging to another platform using a line-level, digital or Bluetooth connection, then juggling two different mobile-platform apps to control the system.

What needs to happen?

As this product function evolves, there needs to be room to improve.

Firstly, there needs to be the ability to establish a network-based multiroom setup using devices based on different platforms. This would require creating and maintaining industry-wide standards and specifications under an umbrella “multiroom AV platform” that all the manufacturers can implement, in a similar way to HDMI-CEC equipment control. The Wi-Fi Alliance have taken steps towards this by developing Wi-Fi TimeSync as a standards-based approach towards achieving audio synchronisation across Wi-Fi-based devices. Qualcomm has also been pushing its AllPlay platform as a multiroom technology that multiple manufacturers can license.

It would also be about identifying and creating multichannel audio setups that work appropriately. In the case of a stereo setup, this would require the speakers to have the same output level and frequency response for a proper stereo pair. A surround setup would require speakers that form a “pair” in the front, surround or back positions (7.1 setups) to have the same output level and frequency response. To the same extent, it could be about adding a subwoofer to speakers that can only handle the middle and higher frequencies.

Manufacturers also have to make sure these systems can work across any network segment type present in a home network, including networks that comprise multiple segments. This can cater for wireless networks implementing an Ethernet or HomePlug wired backbone, or one of the newer distributed-Wi-Fi networks. A few multiroom audio platforms have achieved this goal through the supply of equipment, typically stereo systems and adaptor devices, that offers Ethernet connectivity as well as Wi-Fi connectivity.

There is also the issue of allowing for network-based multiroom audio setups to have a high number of endpoint devices even on a typical home network. Here it is about how much can be handled across the typical network’s bandwidth especially if the network and devices implement up-to-date high-bandwidth technology.

This is important if one considers implementing one or more multichannel groups or using wireless subwoofers in every group for that bit of extra bass. It also is important where someone may want to run two or more logical groups at once, with each logical group playing the same or a different local or online content source.

Some manufacturers may determine device limits based on the number of logical groups that can be created. But I would still like to do away with placing an artificial ceiling on how large a multiroom audio setup can be, with the only limit being the effective bandwidth available to the home network.

Conclusion

Network-based multiroom audio technology is showing some signs of maturity, but a lot more effort needs to take place to assure a level playing field for consumers who want to implement such setups.

Dell takes a leaf out of Detroit’s book with their budget gaming laptops

Articles

Dell G7 15 gaming laptop press picture courtesy of Dell USA

Dell G Series laptops – to be the “pony cars” of the gaming laptop scene

Dell’s new G series laptops pair gaming specs with a cheap plastic chassis | The Verge

Dell rebrands Inspiron gaming laptops to G Series, serves up four new models | Digital Trends

Dell’s G Series laptops are priced for every gamer | PC World

Dell’s Renamed Low-Cost Gaming Laptops are Thinner and Faster Than Before | Gizmodo

From the horse’s mouth

Dell

Product Page

Press Release

My Comments

Ford Mustang fastback at car show

Dell used the same approach as Ford did in the 1960s with the original Mustang

During the heyday of the “good cars” through the 1960s and 1970s, the major vehicle builders worked on various ways to appeal to younger drivers who were after something special.

One of these was to offer a “pony car”: a specifically-designed sporty-styled two-door car that had a wide range of power, trim and other options yet had a base model affordable to this class of buyer. Another was to add to a standard family-car model’s lineup a two-door coupe and / or a “sports sedan” / “sports saloon” that was a derivative of that family car, built on the same chassis but sold under an exciting name, with examples being the Holden Monaro or the Plymouth Duster. This would be something young people could aspire to when they were after something impressive.

Both these approaches were made feasible through the use of commonly-produced parts rather than special parts for most of the variants or option classes. As well, vehicle builders could run with variants that were a bit more special, such as racing-homologation specials, and provide “up-sell” options for customers to vary their cars with.

The various laptop computer manufacturers are trying to create a product class that emulates what was achieved with these cars. Here, it is to offer a range of affordable high-performance computers that can appeal to young buyers who want to play the latest “enthusiast-grade” games.

Dell Inspiron 15 Gaming laptop

The Dell Inspiron 15 7000 Gaming laptop – to be superseded by the Dell G Series

One of the steps that has taken place was to offer a high-performance “gaming-grade” variant of a standard laptop model, like the Dell Inspiron 15 Gaming laptop which I have reviewed. This approach is similar to offering the “Sport” or “GT” variant of a common family-car model, where the vehicle is equipped with a performance-tuned powertrain, like the Ford Falcon GT cars.

But Dell have come much closer to the mark associated with either the “pony cars” or the sporty-styled vehicles derived from the standard family-car model with the release of the G Series of affordable gamer-grade laptops. Here, they released the G3, G5 and G7 models, with baseline variants equipped with traditional hard disks and small RAM amounts. But these are built on a very similar construction to the affordable mainstream laptops.

These models are intended to replace the Inspiron 15 Gaming series of performance laptops, and it shows that Dell wants to cater to young gamers who may not be able to afford the high-end gaming-focused models. As well, the G Series nametag is intended to replace the Inspiron nametag due to the latter’s association with Dell’s mainstream consumer laptop products, which takes the “thunder” out of owning a special product. This is similar to the situation I called out earlier with sporty vehicles that are derivatives of family-car models having their own nameplate.

The G3, which is considered the entry-level model, comes with a 15” or a 17” Full-HD screen and is available in a black or blue finish with the 15” model also available in white. It also has a standard USB-C connection with Thunderbolt 3 as an extra-cost “upsell” option along with Bluetooth 5 connectivity. This computer is the thinnest of the series but doesn’t have as much ventilation as the others.

The G5, which is the step-up model, is a thicker unit with rear-facing ventilation and is finished in black or red. This, like the G7, is equipped with Thunderbolt 3 for an external graphics module along with Bluetooth 4, and offers a fingerprint scanner as an option. Also it comes only with a 15” screen, available in 4K or Full HD resolution.

The G7 is the top-shelf model, totally optimised for performance. This is a thicker unit with increased ventilation and implements a higher-clocked CPU and RAM tuned for performance. It has similar connectivity to the G5 along with similar display technology, and is the only computer in the lineup to offer the powerful Intel Core i9 CPU that was launched as the high-performance laptop CPU in the latest Coffee Lake lineup.

All the computers will be implementing the latest Coffee Lake lineup of Intel high-performance Core CPUs, being the Core i5-8300H or Core i7-8750H processors depending on the specification. In the case of the high-performance G7, the Intel Core i9-8950HK CPU will be offered as an option for high performance.

They all use standalone NVIDIA graphics processors to paint the picture on the display, with a choice between the GeForce GTX1060 with Max-Q, the GeForce GTX1050Ti or the GeForce GTX1050. What is interesting about the GeForce GTX1060 with Max-Q is that it is designed to run with reduced power consumption and thermal output, thus allowing it to run cooler and quieter in slim notebooks. But the limitation here is that the computer doesn’t have the same kind of graphics performance as a fully-fledged GeForce GTX1060 setup, which would be deployed in the larger gaming laptops.

Lower-tier packages will run with mechanical hard drives, while the better packages will offer hybrid hard disks (with increased solid-state cache), solid-state drives, or dual-drive setups with the system drive (the C drive with the operating system) being a solid-state device and data being held on a 1TB hard disk known as the D drive.

I would see these machines serving as a high-performance solo computer for people like college / university students who want to work with high-end games or get a foothold in advanced graphics work. As well, I wouldn’t put it past Lenovo, HP and others to run with budget-priced high-performance gaming laptops in order to compete with Dell in courting this market segment.

What is the new HEIF image file format about?

Apple iPad Pro 9.7 inch press picture courtesy of Apple

An iPhone or iPad running iOS 11 has native support for the HEIF image file format

A new image file format has started to surface over the last few years but it is based on a different approach to storing image files.

This file format, known as HEIF or High Efficiency Image Format, is designed and managed by the MPEG group, who have defined a lot of commonly-used video formats. It is seen by some as the “still-image version of HEVC”, HEVC being the video codec used for 4K UHDTV video files. It uses HEVC as the preferred codec for image files but will also provide support for newer and better image codecs, including codecs offering lossless image compression in a similar vein to what FLAC offers for audio files.

Unlike JPEG and the other image files that have existed before it, HEIF is seen as a “container file” for multiple image and other objects rather than just one image file and some associated metadata. As well, the HEIF file format and the HEVC codec are designed to take advantage of today’s powerful computing hardware integrated in our devices.

The primary rule here for HEIF is that it is a container file format specifically for collections of still images. It is not about replacing one of the video container file formats like MP4 or MKV, which are used primarily for video footage.

Simple concept view of the HEIF image file format

What will this mean?

One HEIF file could contain a collection of images, such as “mapping images” to improve image playback in certain situations. It can also contain the images taken during a “burst” shot, where the camera takes a rapid sequence of images. This can also apply to image bracketing, where you take a sequence of shots at different exposure, focus or other camera settings to identify an ideal image setup or create an advanced composite photograph.

This leads to HEIF displacing GIF as the carrier for animated images that are often provided on the Web. Here, you could use software to identify a sequence of images to be played like a video, including having them repeat in a loop thanks to the addition of in-file metadata. This emulates what the Apple Live Photos function was about with iOS, and can allow users to create high-quality animations, cinemagraphs (still photos with a small discrete looping animation) or slide-shows in the one HEIF file.

HEIF uses the same codec associated with 4K UHDTV for digital photos

There is also the ability to store non-image objects like text, audio or video in an HEIF file along with the images, which opens up many possibilities. For example, you could set a smartphone to take a still and a short video clip at the same time, like Apple Live Photos, or you could have images with text or audio notes. On the other hand, you could attach “stamps”, text and emoji to a series of photos that will be sent as an email or message, like what is often done with the “stories” features in some of the social networks. In some ways it could be seen as a way to package vector-graphics images with a “compatibility” or “preview” bitmap image.

The HEIF format will also support non-destructive metadata-based editing where this editing is carried out using metadata that describes rectangular crops or image rotations. This is so you could revise an edit at a later time or obtain another edit from the same master image.

It also leads to the use of “derived images” which are the results of one of these edits or image compositions like an HDR or stitched-together panorama image. These can be generated at the time the file is opened or can be created by the editing or image management software and inserted in the HEIF file with the original images. Such a concept could also extend to the rendering and creation of a video object that is inserted in the HEIF file.
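As an illustration of this metadata-based approach, here is a hypothetical Python sketch where the master pixels are never touched and the edits, stored as metadata, are applied only when a derived image is rendered. The data layout here is my assumption for illustration, not HEIF's actual binary structure:

```python
def render(master, edits):
    """Apply crop / rotate edits that are described purely as metadata.
    The master image (a list of pixel rows) is never modified; the
    result is a freshly derived image."""
    image = master
    for edit in edits:
        if edit["op"] == "crop":
            top, left = edit["top"], edit["left"]
            height, width = edit["height"], edit["width"]
            image = [row[left:left + width] for row in image[top:top + height]]
        elif edit["op"] == "rotate90":  # rotate clockwise
            image = [list(row) for row in zip(*image[::-1])]
    return image

master = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]
edit_list = [{"op": "crop", "top": 0, "left": 1, "height": 2, "width": 2}]
derived = render(master, edit_list)  # [[2, 3], [5, 6]]; master stays intact
```

Because the master stays intact and the edit list is just data, you can revise the crop later or derive a second, different edit from the same original, which is exactly the benefit the format is after.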

HEIF makes better use of advanced photo options offered by advanced cameras

Here, having a derived image or video inserted in the HEIF file can benefit situations where the playback or server setup doesn’t have enough computing power to create an edit or composition of acceptable quality in an acceptable timeframe. Similarly, it could answer situations where the software used either at the production/editing, serving or playback devices does a superlative job of rendering the edits or compositions.

The file format even provides alternative viewing options for the same resource. For example, a user could see a wide-angle panorama based on a series of shots as either a wide-aspect-ratio image or a looping video sequence representing the camera panning across the scene.

What kind of support exists for the HEIF format?

At the moment, Apple’s iOS 11, tvOS 11 (Apple TV) and macOS High Sierra provide native support for the HEIF format. The new iPhone and iPad provide hardware-level support for the HEVC codec that is part of this format, and the iOS 11 platform along with the iCloud service provides inherent exporting of these images for email and other services not implementing this format.

Microsoft is intending to integrate HEIF into Windows 10 from the Spring Creators Update onwards. As well, Google is intending to bake it into the “P” version of Android, which is the next feature version of that mobile platform.

As for dedicated devices like digital cameras, TVs and printers; there isn’t any native support for HEIF due to it being a relatively new image format. But most likely this will come about through newer devices or devices acquiring newer software.

Let’s not forget that newer image player and editing / management software that is continually maintained will be able to work with these files. The various online services like Dropbox, Apple iCloud or Facebook are, or will be, offering differing levels of HEIF image-file support as their platforms evolve. Depending on the service, this will be to hold the files “as-is” or to export and display them in a “best-case” format.

There will be some compatibility issues with hardware and software that doesn’t support this format. This may be rectified with operating systems, image editing / management software or online services that offer HEIF file-conversion abilities. Such software will need to export particular images or derivative images in an HEIF file as a JPEG image for stills or a GIF, MP4 or QuickTime MOV file for timed sequences, animations and similar material.

In the context of DLNA-based media servers, it may be about a similar approach to an online service where the media server has to be able to export original or derived images from one of these files held on a NAS as a JPEG still image or a GIF or MP4 video file where appropriate.

Conclusion

As the container-based HEIF image format comes on to the scene as the possible replacement for JPEG and GIF image files, it shows promise for both the snapshooter and the advanced photographer.

What is the “hybrid radio” concept all about?

Pure Sensia 200D Connect Internet radio

Pure Sensia 200D Connect Internet radio – a representative of the current trend towards the “hybrid radio” concept

There is some interest in the concept of “hybrid radio” as a possible trend to affect broadcast radio in the online era.

Regular readers of this site will have seen reviews that I have done of Internet radios. These are radios and audio equipment that can pull in audio content via Internet-radio streams and, in most cases, local broadcast radio delivered via FM, AM and/or DAB+ digital radio. This is in addition to access to various online audio services like Pandora or Spotify, or DLNA-capable content hosts on your home network.

The Internet-radio streams may be programs only available via the Internet or simulcasts of radio content broadcast in the radio station’s broadcast area using the traditional methods. They are usually selected through a directory like TuneIn Radio, Radioline or vTuner and their appeal has been to allow access to radio content via devices that only have Internet connection like smartphones, or to provide access to “out-of-area” radio content. This latter factor has a strong appeal for expats, language learners or people with a soft spot for a particular city or country including those of us who like a particular radio talent or programme available in that area but not locally.

It is also being augmented through access to podcasts or on-demand audio through the various Internet-radio directories or through online audio services, with most of the broadcasters making their own podcasts or similar content that they produce.

But there are efforts being taken towards improving the user experience for this class of device, especially where a set is capable of receiving content through traditional broadcast radio and the Internet. The typical user experience is to provide Internet radio as its own “band” or “source” on these devices.

The RadioDNS organisation is behind these “hybrid radio” efforts as a way to keep traditional radio relevant to the Millennial generation who live by their iPhones. It is also bringing the Internet radio concept towards automotive use, especially in a simplified manner that keeps the driver’s hands on the wheel and eyes on the road as much as possible.

“Single-dial” tuning

One of the goals is to provide a “single dial” approach for locating radio stations. This is where you can see a list of local or overseas broadcasters and the set tunes in to that broadcast using the best method available for that broadcast.

Here, the set would choose the local broadcast medium if its radio tuner determines that the signal is strong enough for reliable reception; otherwise it would choose the Internet stream. It would also take advantage of the “follow this station” functionality in FM-RDS or DAB+ to choose the closest or strongest transmitter for that station, something that would be important for national or regional radio networks with many transmitters, or local stations that run infill transmitters to cover dead zones in their area.
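The tuning decision described above can be sketched as a simple rule: use the broadcast when reception is reliable, otherwise fall back to the Internet stream. This Python sketch is illustrative only; the function name and the 0.6 threshold are my assumptions, not part of the RadioDNS specifications:

```python
def pick_source(signal_strength, stream_available, threshold=0.6):
    """'Single-dial' tuning: prefer the local broadcast when reception
    is reliable, otherwise fall back to the station's Internet stream."""
    if signal_strength >= threshold:
        return "broadcast"
    if stream_available:
        return "internet-stream"
    return "broadcast"  # a weak signal still beats silence

# Strong FM signal: stay on the broadcast; weak signal: use the stream
print(pick_source(0.8, stream_available=True))   # broadcast
print(pick_source(0.3, stream_available=True))   # internet-stream
```

A real receiver would re-evaluate this rule as conditions change, such as a car driving out of a station's coverage area, so the listener stays on the same "dial position" throughout.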

There will be the ability to search for stations based on certain criteria like location, content genre or station identifier, a feature that every one of us who uses a DAB/DAB+ digital, RDS-capable FM or Internet radio has benefited from. But this will be augmented by logo-driven browsing, where the station’s familiar logo is shown on the set’s display or station logos appear as part of a station list.

The preset-station concept where you have access to your favourite stations at the touch of a button is augmented by the “single-dial” tuning process. Here, each preset-station space, represented by a button or menu list item, would point to a station and when you select that station, the set would choose the best method of receiving it whether broadcast or Internet.

It could then lead towards the idea of grouping radio station presets to suit particular users’ station preferences or occasion-based station lists. An example that comes to my mind where this feature would earn its keep is a close friend of mine who regularly looked after some school children through the school holidays.

Here, the friend personally liked serious radio content like classical music or talk radio from the public-service radio stations. But when these children travelled with her, they listened to the local commercial (private) FM stations that ran pop music, because they listened to these broadcasters when at home or in the car with their parents. In this situation, it could be feasible to allocate one preset group to the serious radio content and another to the popular-music stations, then call up the latter group when the children are travelling with her.

Such a feature will be considered highly relevant for automotive and portable receivers because these sets are more likely to move between different reception conditions. As well, it leads towards broadcasting and programming approaches that are totally independent of the medium that is being used to carry the broadcast programme.

A rich radio-listening experience

Once you are listening to your favourite programme, the “hybrid radio” experience will be about augmenting what you are listening to.

A news broadcast could be supplied with a written summary of the key items in that bulletin. Similarly, a weather report could benefit from visual information like a map or a chart that shows what the weather will be like over a time period. Even traffic reports could be augmented with maps that show where the traffic jams or closed-off roads are, giving you a fair idea of where possible rat-runs could be taken.

Talkback and other deep-reporting shows could benefit from links to online resources that are relevant to what is being talked about.  It could even be feasible to “throw” contact details like hotline numbers or studio lines to one’s smartphone using one or more methods like a QR code shown on the set’s display.

Let’s not forget that the “hybrid radio” concept can also be about gaining access to these kinds of shows in an “on-demand” manner similar to podcasts. This could even allow a person who heard one of these shows to set things up so that future episodes can be saved locally for listening at one’s convenience. If the station primarily syndicates this content from other producers, like podcasters or content-producing organisations, they could then use the “hybrid radio” arrangement to allow listeners to find the shows as on-demand material.

For music radio programmes, there would be the ability to show the details about the song or piece of music currently playing. Some vendors could take this further by implementing a Shazam-style “buy this” option for the current track in conjunction with one or more “download-to-own” music stores, or to replay the song through an “online jukebox” like Spotify.

There is the ability for advertising-driven radio stations to allow their advertisers and sponsors to offer more than the 30-second radio commercial. Here, they could provide weblinks to the advertiser’s online resources so listeners can act on the advertised offers. This can also extend to online couponing or the ability to book one’s place at a concert or music festival that an artist whose song is currently playing is performing at.

Conclusion

The “hybrid radio” concept could be about simplifying access to radio broadcasts in a media-independent manner then allowing listeners to get the best value out of them.

Voice-driven assistants at risk of nuisance triggering

Article

Amazon Echo on kitchen bench press photo courtesy of Amazon USA

A problem that showed up with the Amazon Echo’s always-listen behaviour was nuisance triggering for the laughter command

Voice control is no laughing matter | Videonet

My Comments

An issue that has been raised recently is the risk of a voice-driven assistant like Apple’s Siri, Amazon’s Alexa, Google’s Assistant or Microsoft’s Cortana being triggered inadvertently and becoming a nuisance.

This was discovered with Amazon’s Echo devices, where you could say “Alexa, laugh” and Alexa would laugh in response. But if this was said in conversation, or through audio or video content you had playing in the background, the response could come across as very creepy. A similar situation was discovered in 2014 with Microsoft’s Xbox, which had voice-search functionality built in and would wake when you said “Xbox on!”. This was aggravated if, for example, a TV commercial from a consumer-electronics outlet announced a special deal on one of these consoles with something like “Xbox On Special” or “Xbox On Sale”, which contains this key phrase.

Similarly, we are starting to see “voice-driven search” become a part of consumer electronics, and this could become an annoyance whenever dialogue in a movie or TV show, or an announcer’s patter in a TV commercial, instigates a search routine during your TV viewing.
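A toy sketch of why this happens: a detector that simply spots the wake phrase anywhere in transcribed audio cannot tell an owner’s command from a commercial’s script. The phrases below are illustrative only.

```python
# Sketch: why naive wake-phrase spotting causes nuisance triggering.
# A detector that looks for the phrase anywhere in transcribed audio
# will fire on a TV commercial's dialogue as readily as on the owner's
# deliberate command.

def naive_wake(transcript: str, wake_phrase: str = "xbox on") -> bool:
    """Fire whenever the wake phrase appears anywhere in the transcript."""
    return wake_phrase in transcript.lower()

print(naive_wake("Xbox on!"))                   # True - intended command
print(naive_wake("Xbox On Sale this weekend"))  # True - nuisance trigger
print(naive_wake("turn the lights off"))        # False
```

Real assistants layer acoustic models and speaker cues on top of phrase spotting to reduce such false triggers, but as the Echo and Xbox incidents show, background speech can still slip through.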

But there are some implementations of these voice assistants that don’t start automatically when they hear the “wake phrase” associated with them, like “Alexa” or “Hey Siri”. In these cases, you would press a “call” button to make the device ready to listen to you. This approach is typical of smartphones, tablets, computers and smart-TV remote controls.

On the other hand, some of the smart speakers like Google Home use a microphone-mute button which you would activate if there is a risk of nuisance triggering. In this mode, the device’s microphone stays inactive until you manually re-enable it.

Google Home uses a microphone-mute button to control the mic

Personally, I would still like to see some form of manual control offered as the norm for these devices, preferably in the form of a “call” button with a distinct tactile feel when pressed. Then you would see a different light glow or other visual cue when the device is ready to listen. Here, the user has some form of control over when the device can listen to them, thus assuring their privacy.

Here, the article underscored the role of speech as part of a user interface that integrates with other interaction types like touch or vision. This provides different comfort zones that users can benefit from, letting them rely on whichever interaction style is comfortable to them.

YouTube to share fact-checking resources when showing conspiracy videos

Articles

YouTube will use Wikipedia to fact-check internet hoaxes | FastCompany

YouTube plans ‘information cues’ to combat hoaxes | Engadget

YouTube will add Wikipedia snippets to conspiracy videos | CNet

My Comments

YouTube home page

YouTube – now also being used to distribute conspiracy theory material

Ever since the Internet became commonplace in the mid-1990s, conspiracy theorists have used various Internet resources to push their weird and questionable “facts” upon the public. One of the methods being used of late is to “knock together” videos and run these as a “channel” on YouTube, the common video-sharing platform.

But Google, which owns YouTube, has now announced at the SXSW “geek-fest” in Austin, Texas that it will be providing visual cues on the YouTube interface to highlight fact-checking resources like Wikipedia, everyone’s favourite “argument-settling” online encyclopaedia. These will appear when known conspiracy-theory videos or channels are playing, and will most likely accompany both the Web-based “regular-computer” experience and the mobile experience.

Wikipedia desktop home page

Wikipedia now being seen as a tool for providing clear facts

It is part of an effort that Silicon Valley is undertaking to combat the spread of fake news and misinformation, something that has become stronger over the last few years due to Facebook and co being used as a vector for spreading this kind of information. In fact, Google’s “news-aggregation” services like Google News (news.google.com) implement tagging of resources that come up regarding a news event, and they even call out “fact-check” resources as a way to help debunk fake news.

Australian government to investigate the role of Silicon Valley in news and current affairs

Articles

Facebook login page

Facebook as a social-media-based news aggregator

Why the ACCC is investigating Facebook and Google’s impact on Australia’s news media | ABC News (Australia)

ACCC targets tech platforms | InnovationAus.com

World watching ACCC inquiry into dominant tech platforms | The Australian (subscription required)

Australia: News and digital platforms inquiry | Advanced Television

My Comments

A question being raised this year is the impact that the big technology companies in Silicon Valley, especially Google and Facebook, are having on the global media landscape. This especially concerns their relationship with established public, private and community media outlets, along with whether it remains sustainable for these providers to create high-quality news and journalistic content, especially in the public-affairs arena.

Google News - desktop Web view

Google News portal

It is being brought about due to the fact that most of us are consuming our news and public-affairs content on our computers, tablets and smartphones aided and abetted through the likes of Google News or Facebook. This can extend to things like use of a Web portal or “news-flash” functionality on a voice-driven assistant.

This week, the Australian Competition and Consumer Commission has commenced an inquiry into Google and Facebook in regard to their impact on Australian news media. Here, it is assessing whether there is real sustainable competition in the media and advertising sectors.

Google Home and similar voice-driven home assistants becoming another part of the media landscape

There is also the kind of effect Silicon Valley is having on media as far as consumers (end-users), advertisers, media providers and content creators are concerned. It also should extend to how this affects civil society and public discourse.

It has been brought about in response to the Nick Xenophon Team placing the inquiry as a condition of their support for the passage of Malcolm Turnbull’s media reforms through the Australian Federal Parliament.

A US-based government-relations expert saw this inquiry as offering a global benchmark regarding how to deal with the power that Silicon Valley has over media and public opinion, along with a desire for greater transparency between traditional media and the big tech companies.

Toni Bush, executive vice president and global head of government affairs, News Corporation (one of the major traditional-media powerhouses of the world) offered this quote:

“From the EU to India and beyond, concerns are rising about the power and reach of the dominant tech platforms, and they are finally being scrutinised like never before,”

What are the big issues being raised in this inquiry?

One of these is the way Google and Facebook are effectively offering news and information services as information aggregators. This is either in the form of providing search services, with “Google” ending up as a generic trademark for searching for information on the Internet; or social-media sharing in the case of Facebook. Alongside this is the provisioning of online advertising services and platforms for online media providers both large and small. This is in fact driven by data, which is being seen as the “new oil” of the economy.

A key issue often raised is how both these companies and, to some extent, other Silicon Valley powerhouses change the terms of engagement with content providers without prior warning. This is often in the form of a constantly-changing search algorithm or News Feed algorithm; or writing the logic behind features like Google Accelerated Mobile Pages or Facebook Instant Articles to point the user experience to resources under their direct control rather than those under the control of the publisher or content provider. These issues affect whether the end user has access to the publisher’s desktop or mobile user experience, which conveys that publisher’s branding and provides engagement and monetisation opportunities for the publisher such as subscriptions, advertising or online shopfronts.

This leads to online advertising, which is very much the direction of a significant part of most businesses’ advertising budgets. What is being realised is that Google has a strong hand in most online search, display and video advertising, whether through operating commonly-used ad networks like AdSense, AdWords or the Google Display Network; or through providing ad-management technology and algorithms to ad networks, advertisers and publishers.

In this case, there are issues relating to ad visibility, end-user experience, brand safety, and effective control over content.

This extends to what is needed to allow a media operator to sustainably continue to provide quality content. It is irrespective of whether they are large or small or operating as a public, private or community effort.

Personally, I would like to see it extend to small-time operators representing the blogosphere, including podcasters and “YouTubers”, being able to create content in a sustainable manner and able to “surface above the water”. This can also include whether traditional media could use material from these sources and attribute and remunerate their authors properly, such as a radio broadcaster syndicating a highly-relevant podcast or a newspaper or magazine engaging a blogger as a freelance columnist.

Other issues that need to be highlighted

I have covered on this site the kind of political influence that can be wielded through online media, advertising and similar services. It is more so where the use of these platforms in the political context is effectively unregulated territory and can happen across different jurisdictions.

One of these issues was use of online advertising platforms to run political advertising during elections or referendums. This can extend to campaign material being posted as editorial content on online resources at the behest of political parties and pressure groups.

Here, most jurisdictions want to maintain oversight of these activities in the context of overseeing political content that could adversely influence an election, and the municipal government in Seattle, Washington wants to regulate this issue for local elections. This can range from issues like attribution of comments and statements in advertising or editorial material, through the amount of time the candidates have to reach the electorate, to mandatory blackouts or “cooling-off” periods for political advertising before the jurisdiction actually goes to the polls.

Another issue is the politicisation of responses when politically-sensitive questions are posed to a search engine or a voice-driven assistant of the Amazon Alexa, Apple Siri or Google Assistant kind. Here, the issue with these artificial-intelligence setups is that they could be set up to provide biased answers according to the political agenda of the company behind the search engine, voice-driven assistant or similar service.

Similarly, the issue of online search and social-media services being used to propagate “fake news” or propaganda disguised as news is something that will have to be raised by governments. It has become a key talking point over the past two years in relation to the British Brexit referendum, the 2016 US Presidential election and other recent general elections in Europe. Here, the questions that could be raised are whether Google and Facebook are effectively acting as “judge, jury and executioner” through their measures, and whether traditional media is able to counter the effective influence of fake news.

Conclusion

What is coming to a head this year is the issue of how Silicon Valley and its Big Data efforts are able to skew the kind of news and information we get. It also includes whether the Silicon Valley companies need to be seen as influential media companies in their own right, and what kind of regulation is needed in this scenario.

British users can benefit from Google Home Voice Calling

Article

Google Home voice calling now works in the UK

Google brings voice calling to Home speakers in the UK | Engadget

My Comments

Last August, Google launched in the USA and Canada a VoIP-based voice-calling feature for their Home smart-speaker platform. This service allowed you to use your voice to call landline and mobile telephones in the USA and Canada and speak to the people you called through your Home smart speaker. A limitation was that it only supported outgoing calls, and the person you called couldn’t identify you through the Caller-ID framework.

It was an attempt by Google to answer the “in-network” calling and messaging service that Amazon was delivering to the Alexa platform. Subsequently, Amazon answered back with an analogue telephony adaptor that connects to your phone line to provide the full gamut of phone functionality through your Echo speakers.

Now Google has taken this feature and launched it in the UK so that people who live there can call landline and mobile numbers based in that territory. How I see this is Google being first off the mark to offer a VoIP telephony service based around a voice-driven home assistant within the UK.

There, the idea of a household landline telephone is still being kept alive thanks to most of the popular telcos and ISPs running desirable multiple-play TV/telephony/Internet packages at very attractive prices, a similar practice being offered in some European countries, especially France.

They are also launching this feature in time for Mother’s Day (Mothering Sunday), which is celebrated in March in the UK. Here, they are running a spot special on their coral-coloured Google Home Mini speaker by dropping the price by £10 to £39 until March 12, with the speaker available through most of the UK’s main electrical-store chains like Currys PC World or John Lewis.

The UK being the first country beyond the USA and Canada to head towards a VoIP platform based around a voice-driven home assistant could be the first “stage” in a race between Google and Amazon to push this feature across the world.

The trends affecting personal-computer graphics infrastructure

Article

AMD Ryzen CPUs with integrated Vega graphics are great for budget-friendly PC gaming | Windows Central

My Comments

Dell Inspiron 13 7000 2-in-1 Intel 8th Generation CPU at QT Melbourne hotel

Highly-portable computers of the same ilk as the Dell Inspiron 13 7000 2-in-1 will end up with highly-capable graphics infrastructure

A major change that will affect personal-computer graphics subsystems is that those subsystems that have a highly-capable graphics processor “wired-in” on the motherboard will be offering affordable graphics performance for games and multimedia.

One of the reasons is that graphics subsystems delivered as an expansion card are becoming very pricey, even exorbitantly expensive, thanks to the Bitcoin gold rush. This is because the GPUs (graphics processors) on the expansion cards are being used simply as dedicated computational processors for mining Bitcoin and similar cryptocurrencies. This situation is placing higher-performance graphics out of the reach of most home and business computer users who want to benefit from this feature for work or play.

But the reality is that we will be asking our computers’ graphics infrastructure to render images at a resolution of 4K or more with high colour depths and dynamic range on at least one screen. There is even the reality that everyone will dabble in games or advanced graphics work at some point in their computing lives, and will even expect a highly-portable or highly-compact computer to perform this job.

Integrated graphics processors as powerful as economy discrete graphics infrastructure

One of the directions Intel is taking is to design their own integrated graphics processors that use the host computer’s main RAM but can serve with the equivalent performance of a baseline dedicated graphics processor that uses its own memory. This also takes advantage of the fact that most recent computers are being loaded with at least 4GB of system RAM, if not 8GB or 16GB. The goal is to support power economy when a laptop is powered by its own battery, while these processors can still support some casual gaming or graphics tasks.

Discrete graphics processors on the same chip die as the computer’s main processor

Intel Corporation is introducing the 8th Gen Intel Core processor with Radeon RX Vega M Graphics in January 2018. It is packed with features and performance crafted for gamers, content creators and fans of virtual and mixed reality. (Credit: Walden Kirsch/Intel Corporation)

This Intel CPU+GPU chipset will be the kind of graphics infrastructure for portable or compact enthusiast-grade or multimedia-grade computers

Another direction that Intel and AMD are taking is to integrate a discrete graphics subsystem on the same chip die (piece of silicon) as the CPU i.e. the computer’s central “brain”, to provide “enthusiast-class” or “multimedia-class” graphics in a relatively compact form factor. It is also about not yielding extra heat or drawing too much power. These attributes make it appealing for laptops, all-in-one computers and low-profile desktops such as the ultra-small “Next Unit of Computing” or consumer / small-business desktop computers, where silent operation and highly-compact housings are desirable.

Both CPU vendors are implementing AMD’s Radeon Vega graphics technology on the same die as some of their CPU designs.

Interest in separate-chip discrete graphics infrastructure

Dell Inspiron 15 Gaming laptop

The Dell Inspiron 15 7000 Gaming laptop – the kind of computer that will maintain traditional soldered-on discrete graphics infrastructure

There is still an interest in discrete graphics infrastructure that uses its own silicon but soldered to the motherboard. NVIDIA and AMD, especially the former, are offering this kind of infrastructure as a high-performance option for gaming laptops and compact high-performance desktop systems; along with high-performance motherboards for own-build high-performance computer projects such as “gaming rigs”. The latter case would typify a situation where one would build the computer with one of these motherboards but install a newer better-performing graphics card at a later date.

Sonnet eGFX Breakaway Puck integrated-chipset external graphics module press picture courtesy of Sonnet Systems

Sonnet eGFX Breakaway Puck integrated-chipset external graphics module – the way to go for ultraportables

This same option is also being offered as part of the external graphics modules that are being facilitated thanks to the Thunderbolt 3 over USB-C interface. The appeal of these modules is that a highly-portable or highly-compact computer can benefit from better graphics at a later date thanks to one plugging in one of these modules. Portable-computer users can benefit from the idea of working with high-performance graphics where they use it most but keep the computer lightweight when on the road.

Graphics processor selection in the operating system

For those computers that implement multiple graphics processors, Microsoft is making it easier to determine which graphics processor an application uses, with the view of allowing the user to select whether the application should work in a performance or power-economy mode. This feature is destined for the next major iteration of Windows 10.

Here, it avoids the issues associated with NVIDIA Optimus and similar multi-GPU-management technologies, where this feature is managed through an awkward user interface. Microsoft is even making sure that a user who runs an external graphics module has the same level of control as one running a system with two graphics processors on the motherboard.
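As a rough model of this per-application preference idea (not Windows 10’s actual implementation), the selection logic could be sketched like this, with illustrative application names:

```python
# Sketch: per-application GPU preference with a power-aware default.
# An explicit per-app choice wins; otherwise the system favours the
# power-saving GPU on battery and the high-performance GPU on mains.
# Application names and labels are illustrative only.

POWER_SAVING = "integrated GPU"
HIGH_PERFORMANCE = "discrete GPU"

app_preferences = {
    "video-editor.exe": HIGH_PERFORMANCE,
    "web-browser.exe": POWER_SAVING,
}

def gpu_for(app: str, on_battery: bool) -> str:
    """Resolve which GPU an application should run on."""
    default = POWER_SAVING if on_battery else HIGH_PERFORMANCE
    return app_preferences.get(app, default)

print(gpu_for("video-editor.exe", on_battery=True))  # discrete GPU
print(gpu_for("text-editor.exe", on_battery=True))   # integrated GPU
```

The appeal of surfacing this choice in the operating system rather than a vendor utility is that the same preference table can cover on-board, soldered-on and external graphics processors alike.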

What I see now is an effort by the computer-hardware industry to make graphics infrastructure for highly-compact or highly-portable computers offer similar levels of performance to baseline or mid-tier graphics infrastructure available to traditional desktop computer setups.

Blockchain, NFC and QR codes work together as a tamper-evident seal for food

Article

Blockchain ensures that your online baby food order is legit | CNet

Video – Click or tap to play

My Comments

On this Website, I have previously covered how certain technologies that work with our smartphones are being used to verify the authenticity and provenance of various foodstuffs and premium drinks.

It has been in the form of NFC-enabled bottle tops used on some premium liquor, together with smartphone apps, to determine if the drink was substituted, while also letting the supplier provide more information to the customer. In France, the QR code has been used as a way to allow consumers to identify the provenance of processed meat sold at the supermarket in response to the 2013 horsemeat scandal that affected the supply of processed beef and beef-based “heat-and-eat” foods in Europe.

The problem of food and beverage adulteration and contamination is rife in China and other parts of Asia, but has also happened around other parts of the world, such as the abovementioned horsemeat crisis, and there is a perpetual question in the US market regarding whether extra-virgin olive oil is really extra-virgin. It can extend to things like whether the produce is organic or, in the case of eggs or meat, whether it was free-range. This has led various technologists to explore the use of IT technologies to track the authenticity and provenance of what ends up in our fridges, pantries or liquor cabinets.

The latest effort is to use blockchain, the “distributed ledger” technology that makes Bitcoin, Ethereum and other cryptocurrencies tick. This time, it is used in conjunction with NFC, QR codes and mobile-platform native apps to create an electronic “passport” for each packaged unit of food or drink. This was put together by a China-based startup after a cat belonging to one of the founders needed to go to the vet after eating contaminated food that the founder had bought from an eBay-like online market based in China.

The initial setup has a tamper-evident seal wrapped around the tin or other packaging, with this seal carrying an NFC element and a printed QR code. A smartphone app scans the QR code and reads the NFC element, which fails once the seal is broken, to verify that the seal is still intact. Once this data is read on the mobile device, the food item’s electronic “passport” appears, showing what was handled where along the production chain.
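A minimal sketch of how such a hash-chained “passport” and seal check could fit together, assuming SHA-256 hashing and illustrative event data (a real system would anchor these records on a distributed ledger rather than an in-memory list):

```python
# Sketch: a hash-chained "passport" of handling events for one food item,
# plus the combined seal check an app could perform. Event names, stages
# and locations are illustrative only.
import hashlib
import json

def add_event(chain: list, event: dict) -> None:
    """Append a handling event, linking it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def chain_valid(chain: list) -> bool:
    """Recompute every hash to detect tampering anywhere in the chain."""
    prev = "0" * 64
    for record in chain:
        body = {"event": record["event"], "prev": record["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

def seal_ok(nfc_readable: bool, qr_passport: list) -> bool:
    # The NFC element stops responding once the seal is torn, so a
    # readable tag plus an unbroken hash chain means the item is intact.
    return nfc_readable and chain_valid(qr_passport)

passport = []
add_event(passport, {"stage": "farm", "location": "Tasmania"})
add_event(passport, {"stage": "cannery", "batch": "A-1042"})
print(seal_ok(True, passport))   # True
print(seal_ok(False, passport))  # False - seal broken
```

The hash chaining is what makes the passport trustworthy: altering any earlier event changes its hash, which no longer matches the link stored in the following record.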

At the moment, the seal is like a hospital bracelet which is sturdy enough to be handled through the typical logistics processes but is fragile enough to break if the food container is opened. This could work with most packaged foodstuffs but food suppliers could then design this technology to work tightly with their particular kind of packaging.

The blockchain-driven “passport” could be used to identify which farm was the source of the produce concerned, with a human-readable reference to the agricultural techniques used, i.e. organic or free-range methods. In the case of processed meat and meat-based foods, the technology can be used to verify the kind and source of the meat used. This is important for religious or national cultures where certain meats are considered taboo, like the Muslim and Jewish faiths with pig-based meats, British and Irish people with horsemeat, or Australians with kangaroo meat.

Once the various packaging-technology firms improve and implement these technologies, it could facilitate how we can be sure that we aren’t being sold a “pig in a poke” when we buy food for ourselves or our pets.