Current and Future Trends Archive

Do I see regular computing targeting both x86 and ARM microarchitectures?

Lenovo Yoga 5G convertible notebook press image courtesy of Lenovo

Lenovo Flex 5G / Yoga 5G convertible notebook which runs Windows on Qualcomm ARM silicon – the first laptop computer to have 5G mobile broadband on board

Increasingly, regular computers are moving towards processor power based around either the classic Intel (x86/x64) microarchitecture or the ARM RISC microarchitecture. Portable computers are heading towards the latter as a power-efficiency measure, a move inspired by ARM's success in smartphones and tablets.

This represents a different approach to designing silicon, especially RISC-based silicon, where separate entities are involved in design and manufacturing. Previously, Motorola took the same approach as Intel and other silicon vendors, designing and manufacturing its desktop-computing CPUs and graphics infrastructure itself. Now ARM design the microarchitecture themselves and license it, with other entities like Samsung and Qualcomm designing and fabricating the exact silicon for their devices.

Apple MacBook Pro running MacOS X Mavericks - press picture courtesy of Apple

Apple to move the Macintosh platform to their own ARM RISC silicon

A key driver of this is Microsoft with their Always Connected PC initiative, which uses Qualcomm ARM silicon similar to what is used in a smartphone or tablet. The goal is a computer that can work on basic productivity tasks for a whole day without needing to be on AC power. Then Apple announced its intention to pull away from Intel and use its own ARM-based silicon for its Macintosh regular computers, a sign of that platform going back to its RISC roots but not in a monolithic manner.

As well, the Linux community have established Linux-based operating systems on the ARM microarchitecture. This has led to Google running Android on ARM-based mobile and set-top devices and offering Chromebooks that use ARM silicon, along with Apple implementing it across their operating systems. Not to mention the many NAS devices and other home-network hardware that implement ARM silicon.

Initially, these alternative microarchitectures were about more sophisticated use cases like multimedia or “workstation-class” computing compared to basic word-processing and allied computing tasks. Think of the early Apple Macintosh computers and the Commodore Amiga with its many “demos” and games, both built on Motorola 68000-series silicon, or RISC/UNIX workstations like the Sun SPARCstation that existed in the late 80s and early 90s. Now it is about power and thermal efficiency for a wide range of computing tasks, especially where portable or low-profile devices are concerned.

Software development

Already mobile and set-top devices use ARM silicon

There will be an expectation for computer operating systems and application software to be written and compiled for both the classic Intel x86 and the ARM RISC microarchitectures. This will require software development tools to support compiling and debugging on both platforms and, perhaps, microarchitecture-agnostic application-programming approaches. It is also driven by the use of the ARM RISC microarchitecture in mobile and set-top/connected-TV computing environments, with a desire to allow software developers to have software that is useable across all computing environments.

WD MyCloud EX4100 NAS press image courtesy of Western Digital

.. as do a significant number of NAS units like this WD MyCloud EX4100 NAS

Some software developers, usually small-time or bespoke-solution developers, will end up using “managed” software development environments like Microsoft’s .NET Framework or Java. These allow the programmer to turn out an executable that depends on pre-installed run-time elements in order to run. These run-time elements are installed in a manner that is specific to the host computer’s microarchitecture and make use of the host computer’s capabilities. These environments may allow the software developer to “write once, run anywhere” without knowing whether the computer the software is to run on uses an x86 or ARM microarchitecture.
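To illustrate the point, here is a minimal sketch in Java, purely as an example of a managed environment: the application itself carries no architecture-specific code, and it is the pre-installed runtime that has been built for the host computer’s x86 or ARM silicon. The class name and printed message are my own illustration rather than anything from a particular product.

// A managed-runtime program that runs unchanged on x86 or ARM hardware,
// because the Java virtual machine (or an equivalent runtime such as .NET)
// is what gets ported to each microarchitecture, not the application itself.
public class ArchProbe {
    public static void main(String[] args) {
        // "os.arch" is a standard JVM system property; typical values include
        // "amd64" on classic Intel/AMD machines and "aarch64" on ARM machines.
        String arch = System.getProperty("os.arch");
        String os = System.getProperty("os.name");
        System.out.println("Running on " + os + " / " + arch
                + " with no architecture-specific code in this program.");
    }
}

The same compiled class file would run unchanged on an Intel-based desktop or an ARM-based laptop, provided a runtime built for that silicon is installed.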

There may also be an approach towards “one-machine two instruction-sets” software development environments to facilitate this kind of development, where the goal is to simply turn out a fully-compiled “fat” or “universal” executable file that contains code for both instruction sets.

It could take an accepted form like run-time emulation or machine-code translation, as is used to allow MacOS or Windows to run extant software written for a different microarchitecture. Or one may have to look at what went on with some early computer platforms like the Apple II, where a user-installable co-processor card with the required CPU allowed the computer to run software for another microarchitecture and platform.

Computer Hardware Vendors

For computer hardware vendors, there will be an expectation to position ARM-based silicon for high-performance, power-efficient computing. This may be about highly-capable laptops that can do a wide range of computing tasks without running out of battery power too soon. Or “all-in-one” and low-profile desktop computers will gain increased legitimacy when it comes to high-performance computing while maintaining their svelte looks.

Personally, if ARM-based computing were to gain significant traction, it may have to be about Microsoft encouraging silicon vendors other than Qualcomm to offer ARM-based CPUs and graphics processors fit for “regular” computers. As well, Microsoft and the Linux community may have to look towards legitimising “performance-class” computing tasks like “core” gaming and workstation-class computing on that microarchitecture.

There may be the idea of using the 64-bit x86 microarchitecture as a solution for focused high-performance work. This may be due to the large amount of high-performance software code written to run with the classic Intel and AMD silicon. It will most likely persist until a significant amount of high-performance software is written to run natively on ARM silicon.

Conclusion

Thanks to Apple and Microsoft heading towards the ARM RISC microarchitecture, the computer hardware and software community will have to look at working with two different microarchitectures, especially when it comes to regular computers.


A digital watermark to identify the authenticity of news photos

Articles

ABC News 24 coronavirus coverage

News services like the ABC that appear on the “screen of respect” that is the main TV screen are often seen as being “of respect”, and all the on-screen text is part of their identity

TNI steps up fight against disinformation | Advanced Television

News outlets will digitally watermark content to limit misinformation | Engadget

News Organizations Will Start Using Digital Watermarks To Combat Fake News | Ubergizmo

My Comments

The Trusted News Initiative is a recently formed group of global news and tech organisations, mostly household names in these fields, who are working together to stop the spread of disinformation where it poses a risk of real-world harm. It also includes flagging misinformation that undermines trust in the TNI’s partner news providers like the BBC. Here, the online platforms can review the content that comes in, perhaps red-flagging questionable content, and newsrooms avoid blindly republishing it.

ABC News website

.. as well as their online presence – they will benefit from having their imagery authenticated by a TNI watermark

One of their efforts is to agree on and establish an early-warning system to combat the spread of fake news and disinformation. It is being established in the months leading up to polling day for the 2020 US Presidential Election and is flagging disinformation where there is an immediate threat to life or election integrity.

It is based on efforts to tackle disinformation associated with the 2019 UK general election, the Taiwan 2020 general election, and the COVID-19 coronavirus plague.

Another tactic is Project Origin, which this article is primarily about.

An issue often associated with fake news and disinformation is the use of imagery and graphics to make the news look credible and from a trusted source.

Typically this involves altered or synthesised images and vision that is overlaid with the logos and other trade dress associated with BBC, CNN or another newsroom of respect. This conveys to people who view this online or on TV that the news is for real and is from a respected source.

Project Origin is about creating a watermark for imagery and vision that comes from a particular authentic content creator. This will degrade whenever the content is manipulated. It will be based around open standards overseen by TNI that relate to authenticating visual content thus avoiding the need to reinvent the wheel when it comes to developing any software for this to work.

One question I would have is whether it is only readable by computer equipment or if there is a human-visible element like the so-called logo “bug” that appears in the corner of video content you see on TV. If it is machine-readable only, will there be the ability for a news publisher or broadcaster to overlay a graphic or message that states the authenticity at the point of publication? Similarly, would a Web browser or native client for an online service have extra logic to indicate the authenticity of an image or video footage?

I would also like to see the ability to indicate the date of the actual image or footage as part of the watermark. This is because some fake news tends to be corroborated with older lookalike imagery, like crowd footage from a similar but prior event, to convince the viewer. Some of us may also look at the idea of embedding the actual or approximate location of the image or footage in the watermark.
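As a rough illustration of how such provenance could be bound to the picture itself, here is a minimal sketch that uses an ordinary public-key signature over the image bytes plus a capture-date string, so any later tampering with either invalidates the check. This is not Project Origin’s actual mechanism, and the newsroom name, date and placeholder image data are purely illustrative.

// Binding an image and its claimed capture date to a publisher's identity with
// a plain digital signature: editing the pixels or the metadata breaks verification.
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class ProvenanceSketch {
    public static void main(String[] args) throws Exception {
        KeyPair publisherKeys = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        byte[] imageBytes = "stand-in-for-raw-image-data".getBytes(StandardCharsets.UTF_8);
        byte[] metadata = "captured=2020-07-04T10:15Z;source=ExampleNewsroom"
                .getBytes(StandardCharsets.UTF_8);

        // The publisher signs image + metadata at the point of publication.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(publisherKeys.getPrivate());
        signer.update(imageBytes);
        signer.update(metadata);
        byte[] signature = signer.sign();

        // Anyone holding the publisher's public key can verify that both the
        // pixels and the claimed capture date are unaltered.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(publisherKeys.getPublic());
        verifier.update(imageBytes);
        verifier.update(metadata);
        System.out.println("Authentic: " + verifier.verify(signature));
    }
}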

There is also the issue of newsrooms importing images and footage from other sources whose equipment they don’t have control over. For example, an increasing amount of amateur and videosurveillance imagery is used in the news usually because the amateur photographer or the videosurveillance setup has the “first images” of the news event. Then there is reliance on stock-image libraries and image archives for extra or historical footage; along with newsrooms and news / PR agencies sharing imagery with each other. Let’s not forget media companies who engage “stringers” (freelance photographers and videographers) who supply images and vision taken with their own equipment.

The question with all this, especially with amateur / videosurveillance / stringer footage taken with equipment that media organisations don’t have control over, is how such imagery can be authenticated by a newsroom. This is more so where the image just came off a source like someone’s smartphone or the DVR equipment within a premises’ security room. There is also the factor that one source could tender the same imagery to multiple media outlets, whether through a media-relations team or simply by offering it around.

At least Project Origin will be useful as a method to allow the audience to know the authenticity and provenance of imagery that is purported to corroborate a newsworthy event.


Freebox routers to support WPA3 Wi-Fi security through a firmware update

Article – French language / Langue Française

Freebox Révolution - courtesy Iliad.fr

A firmware update will give WPA3 Wi-Fi security to the Freebox Révolution and newer Freebox devices

Mise à jour du Freebox Server (Révolution/mini/One/Delta/Pop) 4.2.0 (Freebox Server update 4.2.0 – Révolution/mini/One/Delta/Pop) | Freebox.fr Blog

My Comments

Free.fr have pushed forward the idea of using a firmware update to deliver the WPA3 Wi-Fi network security standard to recent Freebox Server modem-routers that are part of their Freebox Internet service packages.

This is part of the FreeOS 4.2.0 major firmware update, which also improves Wi-Fi network stability and implements QR-code-based device enrolment for the Wi-Fi network along with profile-driven parental control. It will apply to the Freebox Révolution, which I see as the poster child of a highly-competitive French Internet service market, and descendant devices like the mini, One, Delta and Pop.

The WPA3 functionality will be configured to work in WPA2+WPA3 compatibility mode to cater for the extant WPA2-only client devices on the home network. This is because most home-network devices like printers or Internet radios won’t even have the ability to be updated to work with WPA3-secured networks.

At the moment, Free is rolling out updates to their mobile apps to support WPA3 on the mobile operating systems. This is most likely a stopgap until Google, Apple and mobile-phone vendors offer WPA3 “out-of-the-box” with their smartphone and tablet platforms.

What I like about Free’s software-driven approach is that there is no need to replace the modem-router to have your network implement WPA3 Wi-Fi network security. It is very similar to what AVM did to enable distributed-Wi-Fi functionality in a significant number of their FritzBox routers and other devices in their existing home-network product range, where this function was part of a firmware upgrade.

It avoids the need for customers to purchase new hardware if they want to move to WPA3 network security, and I see this as a significant trend regarding European-designed home-network hardware, where newer network capabilities are just a firmware update away.


Apple advises against Webcam shields on its newer MacBooks – could this be a trend that affects new low-profile laptops?

Article

Apple MacBook Pro running MacOS X Mavericks - press picture courtesy of Apple

Apple advises against using camera covers on their recent MacBooks.

Apple: Closing MacBooks with camera covers leads to display damage | Bleeping Computer

Previous coverage on HomeNetworking01.info

Keeping hackers away from your Webcam and microphone

My Comments

Apple has lately advised its MacBook owners to avoid buying and using accessory Webcam covers on their computers.

These Webcam shields are being seen as a security asset thanks to malware being used to activate the Webcam and microphone to surveil the computer’s user. But Apple advises against them due to the MacBook having the Webcam integrated with the circuitry for the screen and built in a very fragile manner. They also mention that the Webcam is used by macOS as an ambient light sensor and for advanced camera functionality.

Dell XPS 13 9360 8th Generation clamshell Ultrabook

with similar advice that could apply to other low-profile thin-bezel laptops like the Dell XPS 13

They recommend that if you use a device to obfuscate your Webcam, you use something that is as thin as a piece of ordinary printing paper and isn’t adhesive. This is because the adhesive can ruin your camera’s picture quality when you want to use it. As well, they recommend that you remove the camera-cover device before you close up your MacBook at the end of your computing session.

I also see this as a key trend that will affect other low-profile laptop computers like Ultrabooks and 2-in-1s that have very thin screen bezels like recent Dell XPS 13s. This is due to manufacturers designing the in-lid electronics in a more integrated manner so as to reduce the lid’s profile. Let’s not forget that with an increasing number of computers, the Webcam is part of facial-recognition-based device-level authentication if its operating system supports this function.

But you still need to protect your privacy when dealing with your laptop’s, all-in-one’s or monitor’s integrated Webcam and microphone.

Primarily, this is about proper computer housekeeping advice like making sure the computer’s operating system, applications, security software and any other software are up-to-date and have the latest security patches. As well, make sure that you know what is installed on your computer and that you don’t install software or click on links that you aren’t sure of.

You may find that your computer or monitor with the integrated Webcam will have some hardware security measures for that camera. This will be in the form of a shutter as used with some Lenovo equipment or a hardware switch that disables the camera as used with some HP equipment. Or the camera will have a tally light that glows when it is in use which is part of the camera’s hardware design. Here, make use of these features to protect your privacy. But you may find that these features may not affect what happens with your computer’s built-in microphone.

As well, you may find that your computer’s operating system or desktop security software has the ability to monitor or control which software has access to your Webcam, microphone or other sensors your computer is equipped with. Here, they may come with this functionality as part of a continual software update cycle. Let’s not forget that some Web browsers may bake camera-use detection in to their functionality as part of a major feature upgrade.

MacOS users should look at Apple’s support page for what they can do while Windows 10 users can look at Microsoft’s support page on this topic. Here, this kind of control is part of the fact that today’s desktop and mobile operating systems are being designed for security.

If your operating system or desktop security software doesn’t have this functionality, you may find third-party software for your computing platform that has oversight of your Webcam and microphone. One example for MacOS is Oversight which notifies you if the camera or microphone are being used, with the ability to detect software that “piggybacks” on to legitimate video-conferencing software to record your conversations. But you need to do some research about these apps before you consider downloading them.

Even if you are dealing with a recent MacBook or low-profile laptop computer, you can make sure your computer’s Webcam and integrated microphone aren’t being turned into a listening device.


More companies participate in Confidential Computing Consortium

Article

Facebook, AMD, Nvidia Join Confidential Computing Consortium | SDx Central

AMD, Facebook et Nvidia rejoignent une initiative qui veut protéger la mémoire vive de nos équipements (AMD, NVIDIA and Facebook join an initiative to protect the RAM of our equipment) | 01Net.com (France – French language / Langue française)

From the horse’s mouth

Confidential Computing Consortium

Web site

My Comments

Some of online life’s household names are becoming part of the Confidential Computing Consortium. Here, AMD, Facebook and NVIDIA have joined this consortium, which is a driver towards secure computing, something that is becoming more of a requirement these days.

What is the Confidential Computing Consortium?

This is an industry consortium driven by the Linux Foundation to provide open standards for secure computing in all use cases.

It is about creating standard software-development kits for secure software execution. This is to allow software to run in a hardware-based Trusted Execution Environment that is completely secure. It is also about writing this code to work independently of the system’s silicon manufacturer and across the common microarchitectures like ARM, RISC-V and x86.

This is becoming important nowadays with malware being written to take advantage of data held within a computing device’s volatile random-access memory. One example of this is RAM-scraping malware targeted at point-of-sale / property-management systems, which steals customers’ payment-card data while a transaction is in progress. Another example is the recent discovery by Apple that a significant number of familiar iOS apps were snooping on the user’s iPhone or iPad clipboard without the knowledge and consent of the user.

As well, in this day and age, most software implements various forms of “memory-to-memory” data transfer for many common activities like cutting and pasting. There is also the fact that an increasing number of apps are implementing context-sensitive functionality like conversion or translation for content that a user selects or even for something a user has loaded in to their device.

In most secure-computing setups, data is encrypted “in transit” while it moves between computer systems and “at rest” while it exists on non-volatile secondary storage like mechanical hard disks or solid-state storage. But it isn’t encrypted while it is in use by a piece of computer software to fulfil that program’s purposes. This is what leads to exploits like RAM-scraping malware.
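The gap can be shown with a minimal sketch: even when a piece of data is properly encrypted for storage, the program has to hold an unencrypted working copy in ordinary RAM while it processes it, and that working copy is what RAM-scraping malware goes after. The card number and class name below are illustrative only; a Trusted Execution Environment would keep that working copy inside encrypted, hardware-isolated memory instead.

// Data can be AES-encrypted "at rest", but the plaintext still sits in this
// process's ordinary memory while the transaction is being handled.
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;

public class AtRestOnly {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        // Illustrative payment-card number: plaintext held in RAM for processing.
        byte[] cardNumber = "4111111111111111".getBytes(StandardCharsets.UTF_8);

        // Encrypt the copy that goes to storage - that copy is protected...
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] stored = cipher.doFinal(cardNumber);

        // ...but while the transaction runs, the unencrypted bytes remain in
        // this process's memory, which is exactly what RAM scrapers read.
        System.out.println("Stored ciphertext length: " + stored.length);
    }
}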

The Confidential Computing Consortium is about encrypting the data that is held within RAM and allowing the user to grant software that they trust access to that encrypted data. Primarily it will be about consent-driven relevance-focused secure data use for the end-users.

But the idea is to assure not just the security and privacy of a user’s data but allow multiple applications on a server-class computer to run in a secure manner. This is increasingly important with the use of online services and cloud computing where data belonging to multiple users is being processed concurrently on the same physical computer.

This is even relevant to home and personal computing, including the use of online services and the Internet of Things. It is highly relevant with authenticating with online services or facilitating online transactions; as well as assuring end-users and consumers of data privacy. As well, most of us are heading towards telehealth and at-home care which involves the handling of more personally-sensitive information relating to our health through the use of common personal-computing devices.

The fact that Facebook is on board is due to the social network’s users making use of social sign-on by that platform to sign up with or log in to various online services. In this case, it would be about protecting user-authentication tokens that move between Facebook and the online service during the sign-up or log-in phase.

As well, Facebook has two fingers in the consumer online-messaging space in the form of the Facebook Messenger and WhatsApp products, and both of these services feature end-to-end encryption, with WhatsApp having this feature enabled by default. Here, they want users to be sure that the messages during, say, a WhatsApp session stay encrypted even in the device’s RAM rather than just between devices and within the device’s non-volatile storage.

I see the Confidential Computing Consortium as underscoring a new vector within the data security concept with this vector representing the data that is in the computer’s memory while it is being processed. Here, it could be about establishing secure consent-driven access to data worked on during a computing session, including increased protection of highly-sensitive business and personal data.


Philips and DTS implement full network multiroom audio functionality in a TV set

Article – From the horse’s mouth

Philips TV image courtesy of Xperi

DTS Play-Fi has Philips as the first brand to offer a TV that is part of a network-based multiroom audio setup

Xperi (DTS)

DTS Play-Fi Arrives On TVs (Press Release)

My Comments

Over the last seven years, there has been a plethora of network-based multiroom audio platforms coming on board. Some of these, like Frontier’s UNDOK, Qualcomm’s AllPlay and DTS’s Play-Fi, allow different manufacturers to join their ecosystems, thus allowing for a larger range of equipment in different form factors to be part of the equation. But each platform only works with devices that use that same platform.

Well, how do I get sound from that 24-hour news channel or sports fixture that I am following on TV through the multiroom speaker in the kitchen with these platforms? Most of the platforms have at least one vendor who offers at least one home-theatre receiver or soundbar that connects to your TV using an HDMI-ARC, optical digital or analogue line-level connection. These devices offer the ability to stream the audio content that comes via those inputs into the multiroom audio setup.

In this situation, you would have to have your TV on and tuned to the desired channel, with its audio output going via the soundbar or home-theatre system that has this technology, for these setups to work. Then you would have to select the soundbar’s or home-theatre receiver’s “TV input” or “TV sound” as the source to play through your network multiroom audio setup’s speakers.

Bang & Olufsen, with their continual investment in their Master Control Link multiroom audio platform, even had the idea of TV sound in another room worked out for that platform since the late 1980s. Here, most of their TV sets made since the late 80s could be set up as an audio endpoint for their multiroom system, with the idea of having one’s favourite CD or radio station playing through the speakers built in to the TV installed in a secondary room. Or one could have the main TV “stream” the sound of a TV broadcast through a set of speakers installed in another room.

But DTS and Philips worked together to put full network multiroom audio into a range of TV sets sold under the Philips name. This feature will initially appear in their 2020-model OLED premium “main-living-area” TVs.

Most of us will remember Philips as an innovative Dutch consumer-electronics brand that has existed over many years, what with their name behind the audio cassette tape that effectively drove the 1970s and 1980s, along with optical-disc technology such as the CD. But Philips divested themselves of the consumer-electronics scene and has had Funai, a Japanese consumer-electronics concern, carry the flag in that market since 2013. This is due to a highly-saturated market when it comes to value-priced consumer electronics.

What will it offer? The TV can be a client device for online services and local content sources that can be streamed via the DTS Play-Fi platform. It will include the ability to show metadata about the content you are listening to on the TV screen. There will even be the ability to have graphically-rich metadata like album art, artist photos or station logos on the TV screen, making more use of that display surface.

You may think that a TV isn’t an ideal audio endpoint for regular music listening from an audio source, what with integral speakers not suited to hi-fi sound or the screen being lit up and showing information about that source. But some of us do listen to music that way if there isn’t a music system. A common example would be listening to radio or a music channel in a hotel room through that room’s TV thanks to digital-TV or “radio-via-TV” setups that hotels provide. Similarly, some of us who haven’t got a separate music system to play CDs on have resorted to using a DVD player to play our CDs through the TV’s speakers.

On the other hand, the TV can be a source device for a Play-Fi device or logical group. This means that audio associated with the video content can emanate through a Play-Fi client device like a speaker. This means that you could have a Play-Fi speaker in your kitchen playing the sound from the sporting fixture that matters on the TV, typically by you using your Play-Fi app to “direct” the TV sound from your Philips TV to the Play-Fi speaker or the logical group it is a member of.

DTS even offers a special mobile-platform app which effectively turns your iOS or Android mobile device into a Play-Fi client device that you use with your existing headphones connected to that device. This could avoid the need to set up, use and be within range of a Bluetooth transmitter adaptor plugged into your TV for wireless headphone functionality. As well, with that setup, you could even be anywhere within coverage of your home network’s Wi-Fi for this to work.

I see this as a chance for any network-based multiroom platform that has a TV vendor “on its books” to draw out the idea of integrating the TV set as a legitimate member device class on their platform. This is whether it is a client audio device with a graphically-rich user interface or a source device with access to audio from a connected video device, the set’s onboard broadcast-TV tuner or a connected-TV service viewed through its smart-TV functionality. In the context of smart TV / set-top box applications, it could be about having integration with one or more network multiroom audio platforms as a legitimate functionality case for these devices.

It would be very similar to what is happening with the Frontier Smart UNDOK network multi-room audio platform. This is where a significant number of member companies for that platform are offering Internet radio devices as part of their device lineup, where most of them have FM and/or DAB+ broadcast-radio functionality with some units having integrated CD players. Here, the UNDOK platform is allowing a user to listen to broadcast radio or CDs played on one of these devices through one or more other platform-member devices that are on the same home network in lieu of listening to online sources through these devices. A similar approach has also been undertaken for the Qualcomm AllPlay platform, with Panasonic having AllPlay-compliant stereo systems equipped with broadcast-radio or CD functionality streaming the sound from a CD or radio station to other Qualcomm AllPlay-compliant network multiroom speakers on your home network.

What is being underscored here is that a network-based multiroom audio setup doesn’t have to be about listening to online audio content. Instead it is also about making legacy audio content available around the house through your home network.


Google fact-checking now applies to image searches

Articles

Google search about Dan Andrews - Chrome browser in Windows 10

Google to add fact checking to images in its search user interfaces

Google adds a fact check feature for images | CNet

From the horse’s mouth

Google

Bringing fact check information to Google Images (Blog Post)

My Comments

Increasingly, images and video are being seen as integral to news coverage, with most of us seeing them, especially photographs, as important when corroborating a fact or news story.

But these are becoming weaponised to tell a different truth compared to what is actually captured by the camera. One way is to use the same or a similar image to corroborate a different fact, with this including the use of image-editing tools to doctor the image so it tells a different story.

I have covered this previously when talking about the use of reverse-image-search tools like TinEye or Google Image Search to verify the authenticity of an image. It will be the same kind of feature that Google has enabled in its search interface when you “google” for something, or in its news-aggregation platforms.

Google is taking this further for people who search for images using their search tools. Here, they are adding images to their fact-check processes so it is easy to see whether an image has been used to corroborate questionable information. You will see a “fact-check” indicator near the image thumbnail and when you click or tap on the image for a larger view or more details, you will see some details about whether the image is true or not.

A similar feature appears on the YouTube platform for exhibiting details about the veracity of video content posted there. But this feature currently is available to users based in Brazil, India and the USA and I am not sure whether it will be available across all YouTube user interfaces, especially native clients for mobile and set-top platforms.

It is in addition to Alphabet, their parent company, offering a free tool to check whether an image has been doctored. This is because meddling with an image so it depicts something else, using something like Adobe Photoshop or GIMP, is being seen as a way to convey a message that isn’t true. The tool, called Assembler, uses artificial intelligence and algorithms that detect particular forms of image manipulation to indicate the veracity of an image.

But I would also see the rise of tools that analyse audio and video material to identify deepfake activity, or video sites, podcast directories and the like using a range of tools to identify the authenticity of content made available through them. This may include “fact-check” labels with facts being verified by multiple newsrooms and universities; or the content checked for out-of-the-ordinary editing techniques. It can also include these sites and directories implementing a feedback loop so that users can have questionable content verified.


Wi-Fi EasyMesh acquires new features in its second release

Articles – From the horse’s mouth

Telstra Smart Modem Generation 2 modem router press picture courtesy of Telstra

Telstra Smart Modem Generation 2 – the first carrier-supplied modem router to be certified as compatible with Wi-Fi EasyMesh

Wi-Fi Alliance

Wi-Fi CERTIFIED EasyMesh™ enables self-adapting Wi-Fi® (Press Release)

Wi-Fi CERTIFIED EasyMesh™ update: Added features for operator-managed home Wi-Fi® networks (The Beacon blog post)

Technicolor

white-label manufacturer of carrier-supplied home-network modem routers

EasyMesh R2 Will Intelligently Manage Your Home Wi-Fi (Press Release)

Previous Coverage on HomeNetworking01.info about Wi-Fi EasyMesh

Wi-Fi defines a new standard for distributed wireless networks

Telstra is the first telco to supply home-network hardware that supports Wi-Fi EasyMesh

My Comments

The Wi-Fi EasyMesh standard, which facilitates a distributed-Wi-Fi network without the need to have all equipment from the same equipment or chipset vendor, has undergone a major revision. This revision, known as Release 2, is intended to improve network management, adaptability and security as well as supporting proper VLAN / multiple-ESSID operation, which is especially required for guest, hotspot and community Wi-Fi applications.

What will Release 2 offer and how will it improve Wi-Fi EasyMesh?

Standardisation of diagnostic information sharing across the network

Wi-Fi EasyMesh Release 2 will make use of the Wi-Fi Data Elements to allow the Controller device to collect statistics and diagnostic information from each access point in a uniform manner. It doesn’t matter which vendor each piece of equipment in the EasyMesh-compliant Wi-Fi network comes from.

Here, it will benefit companies like telcos, ISPs or IT support contractors in identifying where the weaknesses are in a Wi-Fi network that they provide support for. For those of us who support our own networks, we can use the tools provided with the main Wi-Fi router to identify what is going wrong with the setup.

Improved Wi-Fi radio channel management to assure service continuity

The second release of Wi-Fi EasyMesh will offer improved channel management and auto-tuning of the access-point radio transceivers. This will make sure that the Wi-Fi network is able to adapt to changes such as newer networks being set up nearby.

It will also be about implementing DFS to make sure that Wi-Fi networks that use the 5 GHz bands work as good neighbours to radar installations located nearby, like weather radar, that use those bands. This will happen not just on initial setup of any Wi-Fi EasyMesh node but continually, which will be of concern when, for example, a local meteorological authority installs a new radar-based weather station in your neighbourhood.

Increased data security for the wireless backhaul

The wireless backhaul for a Wi-Fi EasyMesh R2 network will be more secure through the use of current Wi-Fi data-security protocols like Simultaneous Authentication Of Equals (part of WPA3). There will even be the ability to support robust authentication mechanisms and newer, stronger cryptographic protocols.

It is seen as necessary because the wireless backhaul is used as the main artery to convey all the network’s traffic between the access points and the main “edge” router. This makes it appealing to anyone who wishes to snoop on a user’s Internet traffic, and also underscores the fact that the Wi-Fi EasyMesh network is effectively a single LAN segment where all the data for Wi-Fi client devices moves around.

Secure wireless-backhaul support for VLAN-separated data traffic

Increasingly, home-network equipment is implementing VLAN technology for a range of reasons. One of these is to facilitate triple-play services and assure quality-of-service for IPTV and IP-based telephony services offered by the telco or ISP. The other is to facilitate guest/hotspot and community networks that use the same Internet service connection but are effectively isolated from the main home or small-business network.

This release of the Wi-Fi EasyMesh standard will support these setups by configuring each node to support the multiple virtual networks including their own separate extended-service-set configurations. The wireless backhaul will also be set up to create separate “traffic lanes” for each logical network that are securely isolated from each other.

Enhanced client steering

There will be the ability to steer client devices between access points, wavebands or channels to prevent one or more of these resources from being overloaded.

For example, it could be feasible to have dual-band client devices like most laptops, tablets and smartphones work on the 5GHz band if they are dealing with multimedia, while keeping the 2.4GHz band for low-traffic needs and single-band devices. Similarly, if a client device “sees” two access points equally, it could be made to use whichever one isn’t being overloaded or has the better throughput.

Of course, the enhanced client steering will provide a seamless roaming experience similar to what happens with the cellular-based mobile telephony/broadband networks that power our smartphones. This is a feature that is of importance with any device that is highly-portable in nature like a smartphone, tablet or laptop.
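As a rough sketch of the sort of steering decision described above (my own illustration, not the actual EasyMesh controller logic, which works on much richer metrics), a controller could prefer the least-loaded access point among those a client can see, and keep multimedia-heavy clients on the 5GHz band:

// Pick a target access point for a client: filter to 5 GHz for multimedia-class
// traffic, then choose the candidate with the fewest associated clients.
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class ClientSteering {
    record AccessPoint(String id, int bandGHz, int associatedClients) {}

    static Optional<AccessPoint> pickTarget(List<AccessPoint> visible, boolean multimedia) {
        return visible.stream()
                .filter(ap -> !multimedia || ap.bandGHz() == 5)                 // keep multimedia on 5 GHz
                .min(Comparator.comparingInt(AccessPoint::associatedClients));  // least-loaded wins
    }

    public static void main(String[] args) {
        List<AccessPoint> seen = List.of(
                new AccessPoint("lounge-2.4GHz", 2, 3),
                new AccessPoint("lounge-5GHz", 5, 9),
                new AccessPoint("hallway-5GHz", 5, 2));
        // A laptop streaming video would be steered to the quiet hallway 5GHz radio.
        System.out.println(pickTarget(seen, true));
    }
}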

Key issues that may surface with Wi-Fi EasyMesh

A key issue that may crop up with Wi-Fi EasyMesh is supporting the use of multiple backhauls across the same network and offering “true-mesh” operation rather than hub-and-spoke operation. Here, it could be about opening up options for load-balancing and increased throughput for the backhaul or providing fault-tolerance for the network.

As well, the idea of a wired backhaul implementing IEEE 1905.1 small-network management technology has to be kept in scope when designing Wi-Fi EasyMesh devices or promoting and implementing this standard. This is more so to encourage HomePlug AV2 or G.hn powerline-network technology as a companion “wired no-new-wires” backhaul approach for deploying satellite nodes in areas where a wireless backhaul may not perform to expectation but it would be costly or unfeasible to pull Ethernet cable across the premises.

How can this be deployed with existing Wi-Fi EasyMesh networks?

There are measures built in to the Release 2 specifications to permit backward compatibility with legacy Wi-Fi EasyMesh network-infrastructure devices like the Telstra Smart Modem Generation 2 that exist in the network.

As well, some vendors are taking the approach of implementing the Release 2 functionality in software form. This makes it feasible for them to bake this functionality in to a firmware update for an existing EasyMesh-compliant router or access point without the need to worry about the device’s underlying hardware.

Conclusion

I see Wi-Fi EasyMesh Release 2 as offering the chance for Wi-Fi EasyMesh to mature as a standard for distributed-Wi-Fi setups within the home and small-business user space. This release may even make it affordable for small businesses to dabble with a basic managed distributed-Wi-Fi setup due to not being required to stay with a particular vendor.


Apple to use the ARM microarchitecture in newer Mac computers

Article

Apple MacBook Pro running MacOS X Mavericks - press picture courtesy of Apple

The Apple Mac platform is to move towards Apple’s own silicon that uses the ARM RISC microarchitecture

It’s Official: The Mac Is Transitioning to Apple-Made Silicon | Gizmodo

My Comments

This week, Apple used its WWDC software developers’ conference to announce that the Macintosh regular-computer platform will move away from Intel’s silicon to their own ARM-based silicon. This is to bring that computing platform into line with their iOS/iPadOS mobile computing platform, their tvOS Apple TV set-top platform and their Watch platform, all of which use Apple’s own silicon.

Here, this silicon will use the ARM RISC instruction-set microarchitecture rather than the x86/x64 architecture used with Intel silicon. But Apple is no stranger to moving the Macintosh computing platform between microarchitectures.

Initially this platform used Motorola 680x0 silicon and subsequently PowerPC silicon, the latter being a RISC instruction-set microarchitecture. This platform initially had more chops compared to Intel’s x86 platform, especially when it came to graphics and multimedia. Then, when Apple realised that Intel offered cost-effective microprocessors using the x86-64 microarchitecture with the same kind of multimedia prowess as the PowerPC processors, they moved the Macintosh platform to Intel silicon.

But Apple had to take initiatives to bring the MacOS and Mac application software over to this platform. This required them to supply software development tools to the software-development community so that programs could be compiled for both the PowerPC and Intel instruction sets. They also furnished an instruction-set translation layer called Rosetta to Mac users who had Intel-based Macs so they could run extant software that was written for PowerPC silicon.

For a few years, this caused some awkwardness for Mac users, especially early adopters, due to the limited availability of software natively compiled for Intel silicon. Or they were finding that their existing PowerPC-native software was running too slowly on their Intel-based computers thanks to the Rosetta instruction-set-translation software working between their program and the computer’s silicon.

Apple will be repeating this process in a very similar way to the initial Intel transition, by providing software-development tools that build for both Intel x86-64 based silicon and their own ARM RISC based silicon. As well, they will issue Rosetta 2, which does the same job as the original Rosetta but translates x86-64 CISC machine instructions to the ARM RISC instruction set that their own silicon uses. Rosetta 2 will be part of the next major version of MacOS, which will be known as Big Sur.

The question that will be raised amongst developers and users of high-resource-load software like games or engineering software is what impact this conversion will have on that level of software. Typically most games are issued for the main games consoles and Windows-driven Intel-architecture PCs rather than Macs or tvOS-based Apple TV set-top devices, with ports for the latter platforms coming later on in the software’s evolution.

There is an expectation that the Rosetta 2 translation software could handle this kind of software properly, to a point where it can perform satisfactorily on a computer using integrated graphics infrastructure and working at Full HD resolution. Then there will be the issue of making sure it works with a Mac that uses discrete graphics infrastructure and higher display resolutions, thus giving the MacOS platform some “gaming chops”.

I see the rise of ARM RISC silicon in the traditional regular-computing world, existing alongside classic Intel-based silicon in this computing space as is happening with Apple and Microsoft, as a challenge for computer software development. That said, some work has taken place within the UNIX / Linux space to facilitate the development of software for multiple computer types, which helped bring forth the open-source and shared-source software movements. This is more so with Microsoft, where there is an expectation to have Intel-based silicon and ARM-based silicon exist alongside each other for the life of a common desktop computing platform, with each silicon type serving particular use cases.


What can be done about taming political rhetoric on online services?

Article

Australian House of Representatives ballot box - press picture courtesy of Australian Electoral Commission

Online services may have to observe similar rules to traditional media and postal services when it comes to handling election and referendum campaigns

There’s a simple way to reduce extreme political rhetoric on Facebook and Twitter | FastCompany

My Comments

In this day and age, a key issue that is being raised regarding the management of elections and referenda is the existence of extreme political rhetoric on social media and other online services.

But the main cause of this problem is the algorithmic nature associated with most online services. This can affect what appears in a user’s default news feed when they start a Facebook, Twitter or Instagram session; whether a bulk-distributed email ends up in the user’s email inbox or spam folder; whether the advertising associated with a campaign appears in search-driven or display online advertising; or if the link appears on the first page of a search-engine user experience.

This is compared to what happens with traditional media or postal services while there is an election or referendum. In most of the democracies around the world, there are regulations overseen by the electoral-oversight, broadcasting and postal authorities regarding equal access to airtime, media space and the postal system by candidates or political parties in an election or organisations defending each option available in a referendum. If the medium or platform isn’t regulated by the government such as what happens with out-of-home advertising or print media, the peak bodies associated with that space establish equal lowest-cost access to these platforms through various policies.

Examples of this include an equal number of TV or radio commercial spots made available at the cheapest advertising rate for candidates or political parties contesting a poll, including the same level of access to prime-time advertising spaces; scheduled broadcast debates or policy statements on free-to-air TV with equal access for candidates; or the postal service guaranteeing priority throughput of election matter for each contestant at the same low cost.

These regulations or policies are to make it hard for a candidate, political party or similar organisation to “game” the system but allow voters to make an informed choice about whom or what they vote for. But the algorithmic approach associated with the online services doesn’t guarantee the candidates equal access to the voters’ eyeballs thus requiring the creation of incendiary content that can go viral and be shared amongst many people.

What needs to happen is that online services have to establish a set of policies regarding advertising and editorial content tendered by candidates, political parties and allied organisations in order to guarantee equal delivery of the content. This means marking such content so as to gain equal rotation in an online-advertising platform; using “override markers” that provide guaranteed recorded delivery of election matter to one’s email inbox; or masking interaction details associated with election matter posted on a Facebook news feed.
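As a rough sketch of the “equal rotation” idea mentioned above (my own illustration, not any platform’s actual mechanism), election creatives could be served round-robin so that each contestant gets the same number of ad slots regardless of how much engagement their content generates:

// Election ads are rotated in strict turn rather than ranked by engagement,
// so incendiary content earns no extra exposure. Names are illustrative only.
import java.util.List;

public class EqualRotation {
    private final List<String> electionAds;   // one creative per candidate or party
    private int next = 0;

    public EqualRotation(List<String> electionAds) {
        this.electionAds = electionAds;
    }

    // Each ad slot goes to the next contestant in turn, regardless of clicks or shares.
    public synchronized String nextAdSlot() {
        String ad = electionAds.get(next);
        next = (next + 1) % electionAds.size();
        return ad;
    }

    public static void main(String[] args) {
        EqualRotation rotation = new EqualRotation(
                List.of("Candidate A spot", "Candidate B spot", "Candidate C spot"));
        for (int slot = 0; slot < 6; slot++) {
            System.out.println("Slot " + slot + ": " + rotation.nextAdSlot());
        }
    }
}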

But the most important requirement is that the online platforms cannot censor or interfere with the editorial content of the message that is being delivered to the voters by them. It is being seen as important especially in a hyper-partisan USA where it is perceived by conservative thinkers that Silicon Valley is imposing Northern-Californian / Bay-Area values upon people who use or publish through their online services.

A question that can easily crop up is the delivery of election matter beyond the jurisdiction that is affected by the poll. Internet-based platforms can make this very feasible and it may be considered of importance for, say, a country’s expats who want to cast their vote in their homeland’s elections. But people who don’t live within or have ties to the affected jurisdiction may see it as material of little value if there is a requirement to provide electoral material beyond a jurisdiction’s borders. This could be answered through social-media and email users, or online publishers having configurable options to receive and show material from multiple jurisdictions rather than the end-user’s current jurisdiction.

What is being realised here is that online services will need to take a leaf out of traditional regulated media and communication’s playbook to guarantee election candidates fair and equal access to the voters through these platforms.
