Category: Computer Hardware Design

Why do I see Thunderbolt 3 and integrated graphics as a valid option set for laptops?

Dell XPS 13 8th Generation Ultrabook at QT Melbourne rooftop bar

The Dell XPS 13 series of ultraportable computers uses a combination of Intel integrated graphics and Thunderbolt 3 USB-C ports

Increasingly, laptop users want their computers to earn their keep for computing activities performed away from home or the office. But they also want the ability to perform tasks that demand more from these machines, like playing advanced games or editing photos and videos.

What is this about?

Integrated graphics processors like Intel's UHD and Iris Plus allow your laptop to run for a long time on its own battery. This is because they use the system RAM to “paint” the images you see on the screen, and because they are optimised for low-power mobile use. This is even more so if the computer's screen resolution is no more than the equivalent of Full HD (1080p), which also doesn't put much strain on the battery.
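Some back-of-envelope arithmetic illustrates why the screen resolution matters when the graphics processor “paints” out of shared system RAM. The figures below are illustrative only; real drivers keep multiple buffers and textures, so actual memory traffic is higher.

```python
# Rough single-framebuffer sizes for shared-memory (integrated) graphics.
# Assumes 4 bytes per pixel (32-bit colour) - an illustrative figure only;
# real drivers also keep back buffers and textures, so actual use is higher.

def framebuffer_mb(width, height, bytes_per_pixel=4):
    """Size of a single framebuffer in megabytes."""
    return width * height * bytes_per_pixel / (1024 * 1024)

full_hd = framebuffer_mb(1920, 1080)
uhd_4k = framebuffer_mb(3840, 2160)

print(f"Full HD: {full_hd:.1f} MB per frame")  # ~7.9 MB
print(f"4K UHD:  {uhd_4k:.1f} MB per frame")   # ~31.6 MB - four times the pixels
```

Every one of those megabytes has to be written to shared RAM many times a second, which is why a Full HD panel is kinder to both the memory bus and the battery than a 4K one.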

Such machines are seen as suitable for day-to-day computing tasks like Web browsing, email, word processing, or lightweight multimedia and gaming while on the road. Some games developers are even producing capable, playable video games optimised to run on integrated graphics, so you can play them on modest computer equipment or while away a long journey.

There are some “everyday-use” laptop computers that are equipped with a discrete graphics processor alongside the integrated graphics, with the host computer implementing automatic GPU-switching for energy efficiency. Typically this graphics processor doesn't offer much for performance-grade computing because it is a modest mobile-grade unit, but it may provide some “pep” for some games and multimedia tasks.

Thunderbolt 3 connection on a Dell XPS 13 2-in-1

But if your laptop has at least one Thunderbolt 3 USB-C port alongside the integrated graphics, another option opens up. Here, you could use an external graphics module, also known as an eGPU, to add high-performance dedicated graphics to your computer while you are at home or the office. These devices also provide charging power for your laptop, which in most cases relegates the laptop's supplied AC adaptor to an “on-the-road” or secondary charging option.

A use case often cited for this kind of setup is a university student who studies on campus and wants to use the laptop in the library or to take notes during classes. They then head home, whether that is student accommodation like an on-campus dorm / residence hall, an apartment or house shared by a group of students, or their parents' home within a short affordable commute of the campus. The use case typifies the idea of the computer also supporting gaming as a rest-and-recreation activity at home after everything they need to do is done.

Razer Blade gaming Ultrabook connected to Razer Core external graphics module - press picture courtesy of Razer

Razer Core external graphics module with Razer Blade gaming laptop

Here, the idea is to use the external graphics module with the computer and a large-screen monitor so that the extra graphics power comes into play during a video game. As well, if the external graphics module is portable enough, it may be about connecting the laptop to a large-screen TV installed in a common lounge area at their accommodation on an ad-hoc basis, so they benefit from that large screen when playing a game or watching multimedia content.

The advantage in this use case is that the computer stays affordable for a student at their current point in life, because it isn't kitted out with a dedicated graphics processor that may be underwhelming anyway. But the student can save towards an external graphics module of their choice and get it at a later time when they see fit. In some cases, it may be about using a “fit-for-purpose” graphics card like an NVIDIA Quadro with the eGPU if they maintain interest in an architecture or multimedia course.

It also extends to business users and multimedia producers who prefer to use a highly-portable laptop “on the road” but use an external graphics module “at base” for activities that need extra graphics power. Examples include rendering video projects or playing a more-demanding game as part of rest and relaxation.

Sonnet eGFX Breakaway Puck integrated-chipset external graphics module press picture courtesy of Sonnet Systems

Sonnet eGFX Breakaway Puck integrated-chipset external graphics module – the way to go for ultraportables

There are a few small external graphics modules that come with a soldered-in graphics processor chip. These units, like the Sonnet Breakaway Puck, are small enough to pack in your laptop bag, briefcase or backpack and can be seen as an opportunity for “improved graphics performance” when near AC power. There will be some limitations with these devices, like a graphics processor that is modest by “desktop gaming rig” or “certified workstation” standards, or reduced connectivity for extra peripherals. But they will put a bit of “pep” into your laptop's graphics performance at least.

Some of these small external graphics modules came about as a way to dodge the “crypto gold rush”, when traditional desktop-grade graphics cards were very scarce and expensive. This was due to them being bought up for cryptocurrency mining rigs to “mine” Bitcoin or Ethereum during that rush. The idea behind these external graphics modules was to offer enhanced graphics performance for those of us who wanted to play games or edit multimedia rather than mine cryptocurrency.

Who is heading down this path?

At the moment, most computer manufacturers are configuring a significant number of Intel-powered ultraportable computers along these lines, i.e. with Intel integrated graphics and at least one Thunderbolt 3 port. Good examples of this are the recent iterations of the Dell XPS 13 (purchase here) and some of the Lenovo ThinkPad X1 family like the ThinkPad X1 Carbon.

Of course, some computer manufacturers also offer laptop configurations with modest-spec discrete graphics silicon alongside the integrated-graphics silicon and a Thunderbolt 3 port. This is typically pitched towards premium 15” computers, including some slimline systems, but these graphics processors may not put up much of a fight when it comes to graphics performance. They are most likely equivalent in performance to a current-spec baseline desktop graphics card.

The Thunderbolt 3 port on these systems would be about using something like a “card-cage” external graphics module with a high-performance desktop-grade graphics card to get more out of your games or advanced applications.

Trends affecting this configuration

The upcoming USB4 specification is meant to bring Thunderbolt 3 capability to non-Intel silicon, thanks to Intel assigning the intellectual property associated with Thunderbolt 3 to the USB Implementers Forum.

As well, Intel has put forward the next iteration of the Thunderbolt specification in the form of Thunderbolt 4. It is more of an evolutionary revision in relation to USB4 and Thunderbolt 3 and will be part of the next iteration of Intel's Core silicon. But it is also intended to be backwards compatible with these prior standards and uses the USB-C connector.

What can be done to further legitimise Thunderbolt 3 / USB4 and integrated graphics as a valid laptop configuration?

What needs to happen is for the use case for external graphics modules to be demonstrated with USB4 and subsequent technology. This kind of setup also needs to appear on AMD-equipped computers and on devices that use silicon based on the ARM microarchitecture, along with Intel-based devices.

Personally, I would like to see Thunderbolt 3 or USB4 technology made available on more of the popularly-priced laptops offered to householders and small businesses. The idea would be to allow the computer's user to upgrade towards better graphics at a later date by purchasing an external graphics module.

This is in addition to a wide range of external graphics modules being available for these computers, with some capable units offered at affordable price points. I would also like to see more of the likes of the Lenovo Legion BoostStation, a “card-cage” external graphics module that lets users install storage devices like hard disks or solid-state drives in addition to the graphics card. These would please those of us who want extra “offload” storage or a “scratch disk” just for use at the workspace. They would also help people who are moving from a traditional desktop computer to a workspace centred around a laptop.

Conclusion

The validity of a laptop computer equipped with a Thunderbolt 3 or similar port and an integrated graphics chipset deserves to be recognised. This is more so where such a system can be improved with an external graphics module that has a fit-for-purpose dedicated graphics chipset.


Do I see regular computing targeting both x86 and ARM microarchitectures?

Lenovo Yoga 5G convertible notebook press image courtesy of Lenovo

Lenovo Flex 5G / Yoga 5G convertible notebook which runs Windows on Qualcomm ARM silicon – the first laptop computer to have 5G mobile broadband on board

Increasingly, regular computers are moving towards processor power based around either the classic Intel (x86/x64) or ARM RISC microarchitectures. This is driven by portable computers heading towards the latter microarchitecture as a power-efficiency measure, a concept driven by its success with smartphones and tablets.

The ARM approach represents a different way of designing silicon, where different entities are involved in design and manufacturing. Previously, Motorola took the same approach as Intel and other silicon vendors, designing and manufacturing their desktop-computing CPUs and graphics infrastructure themselves. Now ARM designs the microarchitecture, while other entities like Samsung and Qualcomm design and fabricate the exact silicon for their devices.

Apple MacBook Pro running MacOS X Mavericks - press picture courtesy of Apple

Apple to move the Macintosh platform to their own ARM RISC silicon

A key driver of this is Microsoft with their Always Connected PC initiative, which uses Qualcomm ARM silicon similar to what is used in a smartphone or tablet. The goal is a computer able to work on basic productivity tasks for a whole day without needing to be on AC power. Then Apple announced its intention to pull away from Intel and use its own ARM-based silicon for Macintosh regular computers, a sign of the platform going back to its RISC roots but not in a monolithic manner.

As well, the Linux community have established Linux-based operating systems on the ARM microarchitecture. This has led to Google running Android on ARM-based mobile and set-top devices and offering Chromebooks that use ARM silicon, along with Apple implementing it in their operating systems. Not to mention the many NAS devices and other home-network hardware that implement ARM silicon.

Initially, these alternative microarchitectures were about more sophisticated use cases like multimedia or “workstation-class” computing compared to basic word processing and allied computing tasks. Think of the early Apple Macintosh computers, the Commodore Amiga with its many “demos” and games, or the RISC/UNIX workstations like the Sun SPARCstation of the late 80s and early 90s. Now it is about power and thermal efficiency across a wide range of computing tasks, especially where portable or low-profile devices are concerned.

Software development

Already mobile and set-top devices use ARM silicon

There will be an expectation for computer operating systems and application software to be written and compiled for both the classic Intel x86 and ARM RISC microarchitectures. This will require software development tools to support compiling and debugging on both platforms and, perhaps, microarchitecture-agnostic application-programming approaches. It is also driven by the use of the ARM RISC microarchitecture in mobile and set-top/connected-TV computing environments, with a desire to allow software developers to have software that is useable across all computing environments.

WD MyCloud EX4100 NAS press image courtesy of Western Digital

.. as do a significant number of NAS units like this WD MyCloud EX4100 NAS

Some software developers, usually small-time or bespoke-solution developers, will end up using “managed” software development environments like Microsoft's .NET Framework or Java. These allow the programmer to turn out an executable that depends on pre-installed run-time elements in order to run. These run-time elements are installed in a manner specific to the host computer's microarchitecture and take advantage of the host computer's capabilities. Such environments may allow the software developer to “write once, run anywhere” without knowing whether the computer the software runs on uses an x86 or ARM microarchitecture.
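A minimal Python sketch illustrates the idea: the interpreter plays the same role here as the .NET or Java runtime, so the same source runs unchanged on x86 and ARM hosts, and the program can still ask the runtime what it is running on if it ever needs to know.

```python
import platform

def describe_host():
    """Report the host microarchitecture the runtime is executing on.
    The same source runs unchanged on x86-64 and ARM hosts - the
    runtime, not the program, deals with the instruction set."""
    machine = platform.machine().lower()  # e.g. 'x86_64', 'amd64', 'arm64', 'aarch64'
    if machine in ("x86_64", "amd64"):
        return "x86-64"
    elif machine in ("arm64", "aarch64"):
        return "ARM 64-bit"
    return f"other ({machine})"

print("Running on:", describe_host())
```

Most managed code never needs even this much awareness; the check only matters for the rare program that loads native components.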

There may also be an approach towards “one-machine two instruction-sets” software development environments to facilitate this kind of development where the goal is to simply turn out a fully-compiled executable file for both instruction sets.

It could be in an accepted form like run-time emulation or machine-code translation, as is used to allow MacOS or Windows to run extant software written for different microarchitectures. Or one may have to look at what went on with some early computer platforms like the Apple II, where a user-installable co-processor card with the required CPU would allow the computer to run software for another microarchitecture and platform.

Computer Hardware Vendors

For computer hardware vendors, there will be an expectation to position ARM-based silicon towards high-performance power-efficient computing. This may be about highly-capable laptops that can do a wide range of computing tasks without running out of battery power too soon. Or it may be about “all-in-one” and low-profile desktop computers gaining increased legitimacy for high-performance computing while maintaining their svelte looks.

Personally, if ARM-based computing were to gain significant traction, it may have to be about Microsoft encouraging silicon vendors other than Qualcomm to offer ARM-based CPUs and graphics processors fit for “regular” computers. As well, Microsoft and the Linux community may have to look towards legitimising “performance-class” computing tasks like “core” gaming and workstation-class computing on that microarchitecture.

There may be the idea of using the 64-bit x86 microarchitecture as a solution for focused high-performance work. This may be due to the large amount of high-performance software code written to run on classic Intel and AMD silicon. It will most likely persist until a significant amount of high-performance software is written to run natively on ARM silicon.

Conclusion

Thanks to Apple and Microsoft heading towards ARM RISC microarchitecture, the computer hardware and software community will have to look at working with two different microarchitectures especially when it comes to regular computers.


Apple to use the ARM microarchitecture in newer Mac computers

Article

Apple MacBook Pro running MacOS X Mavericks - press picture courtesy of Apple

The Apple Mac platform is to move towards Apple's own silicon that uses the ARM RISC microarchitecture

It’s Official: The Mac Is Transitioning to Apple-Made Silicon | Gizmodo

My Comments

This week, Apple used its WWDC software developers' conference to announce that the Macintosh regular-computer platform will move away from Intel's silicon to Apple's own ARM-based silicon. This brings that computing platform into line with their iOS/iPadOS mobile computing platform, their tvOS Apple TV set-top platform and their Watch platform, all of which use Apple's own silicon.

Here, this silicon will use the ARM RISC instruction-set microarchitecture rather than the x86/x64 architecture used with Intel silicon. But Apple is no stranger to moving the Macintosh computing platform between microarchitectures.

Initially this platform used Motorola 680x0 silicon and later PowerPC silicon, the latter being a RISC microarchitecture developed by the Apple-IBM-Motorola alliance. This platform initially had more chops compared to Intel's x86 platform, especially when it came to graphics and multimedia. Then, when Apple realised that Intel offered cost-effective microprocessors using the x86-64 microarchitecture with the same kind of multimedia prowess as the PowerPC processors, they moved the Macintosh platform to Intel silicon.

But Apple had to take initiatives to bring MacOS and Mac application software over to this platform. This required them to supply software development tools to the software-development community so that programs could be compiled for both PowerPC and Intel instruction sets. They also furnished an instruction-set translation layer called Rosetta to users of Intel-based Macs so they could run extant software written for PowerPC silicon.

For a few years, this caused some awkwardness for Mac users, especially early adopters, due to the limited availability of software natively compiled for Intel silicon. Or they found that their existing PowerPC-native software ran too slowly on their Intel-based computers, thanks to the Rosetta instruction-set-translation software working between their program and the computer's silicon.

Apple will be repeating this process in a very similar way to the Intel transition, providing software-development tools that build for both x86-64 silicon and their own ARM-based silicon. As well, they will issue Rosetta 2, which does the same job as the original Rosetta but translates x86-64 machine instructions to the ARM RISC instruction set that their own silicon uses. Rosetta 2 will be part of the next major version of MacOS, which will be known as Big Sur.

The question raised amongst developers and users of high-resource-load software like games or engineering software is what impact this transition will have on that level of software. Typically, most games are issued for the main games consoles and Windows-driven Intel-architecture PCs rather than Macs or tvOS-based Apple TV set-top devices, with ports for these platforms coming later in the software's evolution.

There is an expectation that the Rosetta 2 translation software could handle this kind of software well enough that it performs satisfactorily on a computer using integrated graphics at Full HD resolution. Then there will be the issue of making sure it works on a Mac that uses discrete graphics and higher display resolutions, thus giving the MacOS platform some “gaming chops”.

I see the rise of ARM RISC silicon in the traditional regular-computing world, existing alongside classic Intel-based silicon as is happening with Apple and Microsoft, as a challenge for computer software development. Some work has already taken place within the UNIX / Linux space to facilitate the development of software for multiple computer types, which helped bring forth the open-source and shared-source software movements. This is more so with Microsoft, where there is an expectation to have Intel-based silicon and ARM-based silicon exist alongside each other for the life of a common desktop computing platform, with each silicon type serving particular use cases.


WindowsCentral has identified a handful of portable external graphics modules for your Thunderbolt 3 laptop

Article

Sonnet eGFX Breakaway Puck integrated-chipset external graphics module press picture courtesy of Sonnet Systems

Sonnet eGFX Breakaway Puck integrated-chipset external graphics module – the way to go for ultraportables

Best Portable eGPUs in 2019 | WindowsCentral

From the horse’s mouth

Akitio

Node Pro (Product Page)

Gigabyte

Aorus Gaming Box (Product Page)

PowerColor

PowerColor Mini (Product Page)

Sonnet

Sonnet eGFX Breakaway Puck (Product Page)

My Comments

More Thunderbolt 3 external graphics modules are appearing on the scene, but most of these units are heavy units with plenty of connectivity on them. This is good if you wish to have the external graphics module as part of your main workspace / gaming space, rather than something you will take with you as you travel with that Dell XPS 13 Ultrabook or MacBook Pro.

Dell XPS 13 9360 8th Generation clamshell Ultrabook

Dell XPS 13 9360 8th Generation clamshell Ultrabook – an example of an ultraportable computer that can benefit from one of the portable external graphics modules

Windows Central have called out a selection of these units that are particularly portable in design to allow for ease of transport. This will appeal to gamers and the like who have access to a large-screen TV in another room that they can plug video peripherals into, such as university students living in campus accommodation or a sharehouse. It can also appeal to those of us who want to use the laptop's screen with a dedicated graphics processor, such as to edit and render captured video footage or play a game with the best video performance.

Most of the portable external graphics modules come with a particular embedded graphics chipset and a known amount of display memory. In most cases this will be a high-end mobile GPU, which may be considered low-spec by desktop (gaming-rig) standards. There will also be reduced connectivity options, especially with the smaller units, but they will have enough power output to charge most Thunderbolt-3-equipped Ultrabooks.

An exception that the article called out was the Akitio Node Pro, which is a “card cage” similar in size to one of the new low-profile desktop computers. This unit also has a handle and a downstream Thunderbolt 3 connection for other peripherals based on this standard. It would need an active DisplayPort-to-HDMI adaptor, or a graphics card equipped with at least one HDMI port, to connect to the typical large-screen TV set.

Most of the very small units, or units positioned at the cheap end of the market, would excel at 1080p (Full HD) graphics work. This would be realistic for most flatscreen TVs in use as secondary TVs, or for the laptop's own screen if you stick to the advice to specify Full HD (1080p) as a way to conserve battery power on your laptop.

The exception in this roundup of portable external graphics modules was the AORUS Gaming Box, which is kitted out with the NVIDIA GeForce GTX 1070 graphics chipset. This would be considered a high-performance unit.

Here, these portable external graphics modules are identified as being of use where you are likely to take them between locations and don't mind compromising on functionality or capability.

It can also appeal to first-time buyers who don't want to spend much on their first external graphics module to put a bit of “pep” into their suitably-equipped laptop's or all-in-one's graphics performance. Then, if you later move up to a better external graphics module, perhaps a “card-cage” variety that works with high-performance “gaming-rig” or “desktop-workstation” cards, you can keep one of these as something to use on the road, for example.


USB 4.0 is to arrive as a local-connection standard

Articles

Thunderbolt 3 USB-C port on Dell XPS 13 Ultrabook

Thunderbolt 3 like on this Dell XPS 13 2-in-1 paves the way for USB 4

USB 4.0 to adopt Thunderbolt 3 with 40Gbps data transfer speeds | NeoWin

With USB 4, Thunderbolt and USB will converge | TechCrunch

USB 4 Debuts With Twice the Throughput and Thunderbolt 3 Support | Tom’s Hardware

From the horse’s mouth

USB Implementers’ Forum

USB Promoter Group Announces USB4 Specification (Press Release – PDF)

Intel

Intel Takes Steps To Enable Thunderbolt 3 Everywhere – Releases Protocol (Press Release)

My Comments

Intel and the USB Implementers Forum have worked together towards the USB 4.0 specification. This will primarily be an increased-bandwidth version of USB that also bakes in Thunderbolt 3 technology for further-increased throughput.

USB 4.0 will offer twice the bandwidth of USB 3.1 thanks to more “data lanes”, leading to 40Gbps throughput along the line. It will use the USB Type-C connector and will take a very similar approach to the USB 3.0 standard, which relied on the older USB connection types like USB-A: a “best-case” situation applies regarding bandwidth while allowing for backward compatibility. There will also be the requirement to use higher-performance cables rated for this standard when connecting your host system to a peripheral device using this standard.
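Some back-of-envelope arithmetic shows what those headline line rates mean in practice. The figures below use best-case line rates and a hypothetical 100GB folder of video footage; real-world throughput is lower once protocol overhead is counted.

```python
# Best-case transfer times at the headline line rates (illustrative only;
# protocol overhead means real-world throughput is lower).

def transfer_seconds(size_gb, line_rate_gbps):
    """Seconds to move size_gb gigabytes at line_rate_gbps gigabits/second."""
    return size_gb * 8 / line_rate_gbps  # 8 bits per byte

video_project_gb = 100  # hypothetical folder of 4K footage

for name, gbps in [("USB 3.2 Gen 2 (10Gbps)", 10),
                   ("USB 3.2 Gen 2x2 (20Gbps)", 20),
                   ("USB4 / Thunderbolt 3 (40Gbps)", 40)]:
    print(f"{name}: {transfer_seconds(video_project_gb, gbps):.0f} s")
```

At the 40Gbps rate, the same folder that takes over a minute on a 10Gbps link moves in roughly twenty seconds, which is the kind of difference that matters for direct-attached storage and external graphics.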

Opening up Thunderbolt 3

Intel is opening up Thunderbolt 3 with a royalty-free non-exclusive licensing regime. This is in addition to baking Thunderbolt 3 circuitry into their standard system-on-chip designs rather than requiring a particular “Alpine Ridge” interface chip at both the host and peripheral ends. This will open up Thunderbolt 3 to interface-chipset designers and the like, including the possibility of computing applications based on AMD or ARM-microarchitecture silicon benefiting from this technology.

This effort can make Thunderbolt-3-equipped computers and peripherals more affordable and can open this standard towards newer use cases. For example, handheld games consoles, mobile-platform tablets or ultraportable “Always Connected” laptops could benefit from features like external graphics modules. It may also benefit people who build their own computer systems, such as “gaming rigs”, by allowing Thunderbolt 3 to appear in affordable high-performance motherboards and expansion cards, including “pure-retrofit” cards that aren't dependent on any other particular circuitry on the motherboard.

It is also about integrating the Thunderbolt specification into the USB 4 specification as a “superhighway” option rather than calling it a separate feature. As well, Thunderbolt 3 and the USB 4 specification can become the subject of increased innovation and cost-effective hardware.

Where to initially

Initially I would see USB 4.0 appear in “system-expansion” applications like docks or external graphics modules, perhaps also in “direct-attached-storage” applications, which are USB-connected high-performance hard-disk subsystems. Of course, it will lead towards the possibility of a laptop, all-in-one or low-profile computer being connected to an “extended-functionality” module with dedicated high-performance graphics, space for hard disks or solid-state storage, and perhaps an optical drive, among other things.

Another use case that would be highlighted is virtual reality and augmented reality, where you are dealing with headsets that have many sensors along with integrated display and audio technology. These would typically be hooked up to computing devices the size of the early-generation Walkman cassette players worn on the body, or even the size of a smartphone. This is more so with the rise of ultra-small “next-unit-of-computing” devices which pack typical desktop computer power into a highly-compact housing.

Of course, this technology will roll out initially as a product differentiator in newer premium equipment preferred by those wanting “cutting-edge” technology. Then it will reach a wider user base as more chipsets with this technology appear and are turned out in quantity.

Expect the USB 4.0 standard to be seen as evolutionary as more data moves quickly along these lines.


The MicroSD card undergoes evolutionary changes

Articles

1TB microSD cards will boost the storage of your device, if you can afford it | TechRadar

From the horse’s mouth

SD Association

microSD Express – The Fastest Memory Card For Mobile Devices (PDF – Press Release)

Video – Click or tap to play

My Comments

The microSD card, used as a removable storage option in most “open-frame” smartphones and tablets and increasingly in laptops, has gained two significant improvements at this year's Mobile World Congress in Barcelona.

The first of these improvements is the launch of microSD cards that can store 1 terabyte of data. Micron pitched the first of these devices, while SanDisk, owned by Western Digital and also a strong player with the SD Card format, offered their 1TB microSD card, which is the fastest microSDXC card at this capacity.

The new SD Express card specification, part of the SD 7.1 Specification, provides a “best-case high-throughput” connection based on the same interface technology used in a regular computer for fixed storage or expansion cards. The microSD Express variant, which is the second improvement launched at the same show, takes the SD Express card specification to the microSD card size.

The SD Express specification, now applying to the microSD card size, achieves a level of backward compatibility for host devices implementing orthodox SD-card interfaces. This is achieved through a set of electrical contacts on the card for PCI Express and NVMe interfaces along with the legacy SD Card contacts, with the interfacing to the storage silicon taking place in the card.

As well, there isn’t the need to create a specific host-interface chipset for SD card use if the application is to expressly use this technology and it still has the easy upgradeability associated with the SD card. But most SD Express applications will also have the SD card interface chipset to support the SD cards that are in circulation.

This will lead to the idea of fast, high-capacity, compact removable solid-state storage for a wide range of computing applications, especially where size matters. This applies whether the goal is a finished product with a smaller volume, or a higher effective circuit density leading to more functionality within the same physical “box”.

One use case that was pitched is the idea of laptops or tablets, especially ultraportable designs, implementing this technology as primary storage. Here, microSD Express cards don't take up the same space as traditional SATA or M.2 solid-state storage devices. There is also the ability for users to easily upsize their computer's storage capacity to suit their current needs, especially if they bought the cheapest model with the lowest amount of storage.

Photography and videography will be another key use case, especially when the images or footage being captured are of 4K UHDTV or higher resolution and/or have high dynamic range. It will also benefit highly-compact camera applications like “GoPro-style” action cams or drone-mounted cameras, along with advanced photography and videography applications like 360-degree videos.

Another strong use case being pitched is virtual-reality and augmented-reality technology, where there is a dependence on computing power within a headset or a small lightweight pack attached to the headset. Here, the idea would be to have the headset and any accessories able to be worn comfortably by the end-user while they engage in virtual reality.

Some of the press coverage talked about the use of a 1TB SD card in a Nintendo Switch handheld games console and described it as fanciful for that particular console. But this technology could appeal to newer-generation handheld games consoles, especially where these consoles are used for epic-grade games.

Another interesting use case would be automotive applications, whether supplied by the vehicle builder on an OEM basis or installed by the vehicle owner as an aftermarket fitment. The storage could hold anything from a large library of high-quality audio content to large mapping areas or the data for many apps.

These microSD card improvements will initially be at the “early-adopter” stage, where they will be very expensive and have limited appeal. There may also be a few bugs to iron out in their design or implementation as other SD-card manufacturers come on board and offer more of these cards.

At the moment, there aren’t any devices or SD-card adaptors that take advantage of SD Express technology, but this will change as new silicon and finished devices come on the scene. USB adaptors that support SD Express would need the same kind of circuitry as a portable hard drive, along with USB 3.1 or USB Type-C connectivity, to support “best-case” operation with existing host devices.

This technology could become a game-changer for removeable or semi-removeable storage media applications across a range of portable computing devices.


European Union’s data security actions come closer


Map of Europe By User:mjchael by using preliminary work of maix¿? [CC-BY-SA-2.5 (http://creativecommons.org/licenses/by-sa/2.5)], via Wikimedia Commons

The European Union will make steps towards a secure-by-design approach for hardware, software and services

EU Cybersecurity Act Agreed – “Traffic Light” Labelling Creeps Closer | Computer Business Review

Smarthome: EU führt Sicherheitszertifikate für vernetzte Geräte ein | Computer Bild (German Language / Deutschen Sprache)

From the horse’s mouth

European Commission

EU negotiators agree on strengthening Europe’s cybersecurity (Press Release)

My Comments

After the GDPR effort for data protection and end-user privacy with our online life, the European Union want to take further action regarding data security. But this time it is about achieving a “secure by design” approach for connected devices, software and online services.

This is driven by the recent Wannacry and NotPetya cyberattacks and is being achieved through the Cybersecurity Act which is being passed through the European Parliament. It follows after the German Federal Government’s effort to specify a design standard for routers that we use as the network-Internet “edge” for our home networks.

There will be a wider remit for the EU Agency for Cybersecurity (ENISA) concerning cybersecurity issues that affect the European Union. But the key element here is a European-Union-wide framework for cybersecurity certification, affecting online services and consumer devices, with certification valid throughout the EU. It is internal-market legislation covering the security of connected products, including the Internet of Things, as well as critical infrastructure and online services.

The certification framework is about products being “secure-by-design”, an analogy to the similar concept in building and urban design where a development or neighbourhood is hardened against crime as part of the design process. In the IT case, this involves using various logic processes and cyberdefences to make it harder to penetrate computer networks, endpoints and data.

It will also be about making it easier for people and businesses to choose equipment and services that are secure. The computer press have drawn an analogy to the “traffic-light” coding on food and drink packaging that encourages customers to choose healthier options.

“In the digital environment, people as well as companies need to feel secure; it is the only way for them to take full advantage of Europe’s digital economy. Trust and security are fundamental for our Digital Single Market to work properly. This evening’s agreement on comprehensive certification for cybersecurity products and a stronger EU Cybersecurity Agency is another step on the path to its completion.” – Andrus Ansip, Vice-President for the Digital Single Market

What the European Union are doing could have implications beyond the European Economic Area. Here, the push for a “secure-by-design” approach could make things easier for people and organisations in and beyond that area to choose IT hardware, software and services satisfying these expectations thanks to reference standards or customer-facing indications that show compliance.

It will also raise the game towards higher data-security standards from hardware, software and services providers especially in the Internet-of-Things and network-infrastructure-device product classes.


How about expansion docks with room for extra secondary storage?

Sony VAIO Z Series and docking station

Like with this (Sony) VAIO Z Series ultraportable, an add-on module with integrated optical disk or other storage could add capabilities to today’s small-form-factor computers

A key trend affecting personal computing is for us to move away from the traditional three-piece desktop computer towards smaller form factors.

Here, the traditional desktop computer’s system unit was a large box about the size of a hi-fi component or a large tower. The smaller form factors we are heading towards are laptops / notebooks; ultra-small desktop computers of the Intel NUC ilk; or all-in-one designs which integrate the computing power with the display.

USB-C (also the physical connector for Thunderbolt 3), the newer connection type that can make better use of add-on modules

With these setups, it is assumed that we are moving away from on-board data storage in the form of hard disks or staying well clear of packaged media in the form of optical disks. This is driven by online software delivery and the use of streaming audio and video services.

Intel Skull Canyon NUC press picture courtesy of Intel

… with this applying to small-form-factor desktops like the Intel Skull Canyon NUC

What was often valued about the traditional computer design was the extra space to house more storage devices like hard disks or optical drives, or the ability to install high-performance graphics cards. This is why these form factors still exist in the form of high-performance “gaming-rig” computers, where performance is paramount and more data is likely to be held on these machines.

But some of us will still want to maintain access to prior storage media types like optical discs, or use high-performance graphics chipsets, especially at home or in our main workspace. For example, traditional optical discs are still valued as media in an always-accessible, future-proof, collectible form.

There is also the idea of maintaining a secondary hard disk as extra data storage, whether as a backup or an offload location. This is more so with laptop computers equipped with solid-state storage of up to 256GB, where there is a desire to keep most of the data you aren’t working with somewhere else.

Laptop users have often answered this need with a “dock” or expansion module that connects a cluster of peripherals through a single connection to the host laptop. More recently, Thunderbolt 3 has facilitated the rise of external graphics modules which add extra graphics horsepower to laptops and similar low-profile computers.

This concept can be taken further with USB-C or Thunderbolt 3 expansion docks that have integrated optical drives and/or mounting space for hard disks. These would present to the host as Mass Storage devices, using the operating system’s class drivers for this kind of device. There would of course be expansion abilities for extra USB devices, as well as an Ethernet network interface and/or an onboard USB audio chipset with its own S/PDIF or analogue connections.
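The class-driver idea can be illustrated with real USB-IF base class codes. The mapping below is a simplification, and the device descriptions in it are my own examples rather than anything from a particular dock, but it shows why standards-compliant onboard devices need no extra software:

```python
# The dock's onboard devices would enumerate under standard USB device
# classes, which is what lets the operating system's generic class
# drivers pick them up. A few real USB-IF base class codes:
USB_CLASS_DRIVERS = {
    0x01: "Audio (e.g. the dock's onboard audio chipset)",
    0x02: "Communications (e.g. a CDC-based Ethernet interface)",
    0x03: "Human Interface Device (keyboards, mice)",
    0x08: "Mass Storage (the integrated optical drive or hard disks)",
}

def class_driver_for(interface_class: int) -> str:
    """Name the generic class driver an OS would bind, if any."""
    return USB_CLASS_DRIVERS.get(interface_class,
                                 "vendor-specific driver needed")

print(class_driver_for(0x08))  # Mass Storage (...)
```

Anything outside the standard classes would drag the user back into driver installation, which is exactly what these docks should avoid.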

Video to attached displays could be handled via DisplayPort alternate mode, or via USB DisplayLink for docks not implementing external-graphics-module functionality. Docks that do implement an external graphics module, by contrast, are like “hotting up” a car for higher performance.

Of course, they would have to be self-powered, with a strong USB Power Delivery output for the host and for USB peripherals. There could be research into having USB ports enter an optimised charge-only mode when the host computer isn’t active, for example.
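As a sketch of the power budgeting involved, USB Power Delivery’s standard fixed voltages are 5, 9, 15 and 20 V, with the familiar 100 W ceiling reached at 20 V / 5 A over an e-marked cable. The laptop wattage below is an assumed example:

```python
# Back-of-envelope check on a dock's Power Delivery budget.
def pd_current_needed(watts: float, volts: float) -> float:
    """Current in amps the supply must source at a given PD fixed voltage."""
    if volts not in (5, 9, 15, 20):
        raise ValueError("not a standard USB PD fixed voltage")
    return watts / volts

# Charging a (hypothetical) 60 W laptop at 20 V needs 3 A, within the
# rating of an ordinary USB-C cable; anything above 60 W at 20 V needs
# an e-marked 5 A cable, up to the 100 W ceiling.
print(pd_current_needed(60, 20))  # 3.0
```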

Most of the onboard devices would be required to present themselves according to standardised device classes. This typically leads to a “plug-and-play” setup routine, so you aren’t downloading extra software to run the devices if you use a recent version of a mainstream operating system.

Manufacturers could see these devices as complements to their ultra-small desktop computer product lines, in an approach similar to how consumer hi-fi equipment of a particular model range is designed and marketed. Here, the emphasis would be on equipment that shares common styling or functional features, while encouraging the ability to expand the ultra-small desktop computer at a later date.

The idea here is to allow users to adapt portable or small-form-factor computers to their needs as and when they see fit, as long as these computers implement USB 3.1 connections in Type-C form or, for faster throughput and support for external graphics modules, Thunderbolt 3 over USB-C.


My Experience with the USB-C connection type

Dell XPS 13 2-in-1 Ultrabook - USB-C power

USB-C as the power connection for a Dell XPS 13 2-in-1 Ultrabook

I have given a fair bit of space on HomeNetworking01.info to the USB-C host-peripheral connection type since it was launched, mostly because it is a simplified, high-throughput, high-reliability connection type that will grace our computers, smartphones and similar devices.

Just lately I upgraded to a new Samsung Galaxy S8+ Android smartphone after my previous smartphone failed. I already had some experience with the USB-C connection through reviewing the Dell XPS 13 2-in-1 convertible Ultrabook, which used USB-C as its primary power connection. The Android smartphones I had before implemented a USB micro-AB connection for their power and data-transfer needs, and recent iterations of Android, which I experienced on the Galaxy Note series of phones, supported USB OTG host-operation modes.

USB-C connector on Samsung Galaxy S8 Plus smartphone

Samsung S8 Plus Android phone using USB-C connection for power and data

The main feature that I liked was the simple approach to connecting devices to my phone. Here, I didn’t have to worry about which way the cable plugged in to my phone, something that was important when it came to connecting it to a charger or power pack.

A situation I previously encountered with the USB micro-B connector was the need to replace USB cables because the micro-B plug would wear out in the phone’s micro-AB socket through frequent connection and disconnection, typically from connecting the phone to a charger and then disconnecting it for regular use. I ended up buying replacement USB-A to USB micro-B cables to remedy this problem.

That wear was often brought on by the leaf-spring lugs on the USB micro-B plug that make sure it fits snugly in the common USB micro-AB socket on smartphones; these can easily lose their springiness through repeated use. The USB-C connector doesn’t rely on those leaf springs to secure the plug in the socket, thanks to it being a single plug design for data input and output.

Now I am ending up with a sure-fire connection experience for USB devices, similar to using the regular USB connections commonly fitted to regular computers and peripherals.

Memory card reader connected to USB-C adaptor for Samsung Galaxy S8 Plus smartphone

USB-C also works for connecting this phone to a memory card reader for reading photos from my camera

Another benefit I have experienced is the ability to use the same connector whether the phone is acting as host to a peripheral or being connected to another computer device. This avoids having to use a USB OTG cable if, for example, I want to post a photo from my camera’s SD card on Instagram. I still needed a USB-A (female) to USB-C adaptor for the SD card reader, but would find this useful for using the SD card reader or a USB memory key with any USB-C host device.

Again, I don’t need to worry about which way the cable plugs in to a computer or smartphone equipped with this connector. This can come in handy when dealing with USB memory keys attached to keyrings or USB peripherals hanging off a cable.

Personally, I see the USB Type-C connection appearing as a viable connection type for laptops, tablets and smartphones especially where these devices are designed to be slim.

One way this connection could be exploited further would be for smartphone manufacturers to install two USB Type-C connectors at the bottom of their products. Similarly, a USB battery pack with USB Type-C connectivity could have three USB-C sockets and USB hub functionality. This would allow multiple devices to be connected to the same host device.

This article will be built out further as I deal with more connection setups that are based around the USB Type-C connector.


What is happening with driver-free printing

What is driver-free printing?

HP OfficeJet 6700 Premium business inkjet multifunction printer

Driver-free printing like AirPrint allows for use of printers like this HP OfficeJet without the need to install drivers or extra software on host computers

This is to be able to use a printer with a host computing device without the need to install drivers or additional software on that device.

The situation with most operating systems since the rise of page-based printers is that you have had to install additional driver software to get the software on your computer to work with your printer.

This involves knowing the make and model of the printer and how it is connected to the host device. One would then download the software from the printer manufacturer’s website or the computer platform’s app store, or load it from media supplied with the printer by the manufacturer.

Of course, how your printer connects to your computer or mobile device, be it through a USB cable, a Bluetooth link or a network is about the physical link to that printer. Most of the standards associated with these connection methods don’t provide support for driver-free printing.

Why is there an imperative for driver-free printing?

Mobile computing

You could print from a mobile-platform tablet like this Lenovo to a range of printers without installing lots of extra apps. In fact, you can use Mopria to print from this Lenovo Android tablet driver-free.

A key imperative behind driver-free printing is the concept of mobile computing. It is about using highly-portable computing devices like laptops, smartphones and tablets for personal computing no matter where you are. This may include being able to use someone else’s printer or a public printing facility to get that document or photo printed there and then.

Similarly it can be about paying a service provider to perform advanced printing tasks such as bulk printing and document finishing for a small business or community organisation, or a photo lab to turn out a special photo as a large high-quality print on glossy paper.

Dedicated Computing Devices

Furthermore, it can be about the idea of providing a computing device, especially a dedicated computing device with printing abilities. A key application would be interactive TV supported by a smart-TV or set-top-box platform. In this scenario, a viewer could do something like print out a recipe from a cooking show that they view on demand just by using the remote control.

Business users may find that driver-free printing benefits point-of-sale technology, especially with pure-play devices like cash registers and payment-card terminals. This class of device would benefit exceptionally, given the goal of not admitting any more software than necessary and the requirement to use only previously-vetted software.

Similarly, it can benefit the concept of complementary-capability printing among multiple printers, by allowing one to, for example, make a colour copy using the scanning functionality of a monochrome laser multifunction printer together with a pure-play colour printer.

Accessible Computing

In the case of accessible computing, some blind users use PDA devices with tactile data input similar to a Perkins Braille typewriter, and voice or Braille tactile output. These users want to yield information in hard-copy form for sighted users, but these devices have the same software constraints as a dedicated computing device and would typically have to work according to common driver-free printing standards.

Similar devices are being constructed to allow people to live a life independent of particular disabilities and these will benefit from driver-free hard-copy output.

Efforts that have taken place to achieve this goal

In the early days of personal computing, Epson’s ESC/P codes served as a de-facto standard for how dot-matrix impact printers formatted the characters they printed whenever anything beyond ordinary ASCII text was required. This was effectively used by every manufacturer who offered dot-matrix and similar printers, whether through licensing or emulation.

A similar situation took place with Adobe with PostScript and HP with PCL as common page-description languages for laser and inkjet page printers. Again, other manufacturers took this on with licensing or emulation of the various language-interpreter software for their products.

These standards fell away as GUI-based operating systems managed printing at the operating-system level rather than at the application level. This was underscored with some printer manufacturers working with Microsoft to push forward with GDI-based host-rasterised printing leading towards cost-effective printer designs.

There have been some initial efforts at driver-free printing in particular application classes, especially where dedicated-function devices are involved. This came through the persistence of ESC/P and its ESC/POS derivative within the point-of-sale receipt-printer space, along with the use of PictBridge as a driver-free method for printing photos from consumer digital cameras.
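The persistence of ESC/POS makes sense once you see that a receipt job is just a byte stream any compliant printer understands, with no driver in between. This sketch uses two well-known commands from Epson’s published ESC/POS command set (ESC @ to initialise, GS V to cut the paper); the receipt content is made up:

```python
# A receipt print job as a raw ESC/POS byte stream -- no driver needed.
ESC, GS = b"\x1b", b"\x1d"

def simple_receipt(lines):
    job = ESC + b"@"            # ESC @ : initialise the printer
    for text in lines:
        job += text.encode("ascii") + b"\n"
    job += GS + b"V" + b"\x00"  # GS V 0 : full paper cut
    return job

job = simple_receipt(["CAFE RECEIPT", "Flat white   $4.50"])
print(job.startswith(b"\x1b@"))  # True
```

A host simply writes these bytes to the printer over USB, serial or a network socket, which is why this scheme survived the driver era unscathed.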

Similarly, some managed-business-printing and service-based-printing platforms implemented a “single-driver” approach: one installable program to join the platform and print to any machine the user is authorised to use, regardless of make and model. But it didn’t really answer the need for true driver-free operation in a printing environment.

As the home network became more common and was seen as part of the home-entertainment technology sphere, the UPnP Forum and DLNA made attempts at driver-free printing as part of their standards. It was positioned as a way to allow smart TVs, electronic picture frames and set-top boxes, for example, to yield hard-copy output of photos. HP was the only vendor whose mid-tier and premium consumer printers answered these standards, as I discovered while reviewing some of their products.

The Printer Working Group then started work on IPP Everywhere as a way to achieve driver-free printing over a network or direct connection for both consumer and business applications. This even covered exposing printer capabilities and features, such as stapling or PIN-driven secure job release, without the need to add special software.

One of the standard page-description languages specified for IPP Everywhere is the Adobe PDF format, which is in fact used for “download-to-print” situations. This is because it is seen as a file format that represents “electronic hard copy”, and the common practice in the download-to-print use case is to prepare a document as a PDF file before making it available. The IPP Everywhere approach also defined a “printing by reference” use case, where the printer fetches the PDF document off the Web server via a known URL, rather than the user downloading it to their computing device in order to turn out a hard copy.
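To make “printing by reference” concrete, this sketch assembles the attribute set an IPP Print-URI operation carries, using attribute names from the IPP specification (RFC 8011). It only builds a plain dictionary; a real client would encode these attributes into IPP’s binary wire format, and the printer and document URLs here are hypothetical:

```python
# "Print by reference" in IPP terms: instead of uploading the document,
# the client sends a Print-URI operation naming a URL, and the printer
# fetches the PDF itself.
def build_print_uri_request(printer_uri: str, document_url: str,
                            user: str) -> dict:
    return {
        "operation": "Print-URI",
        "operation-attributes": {
            "printer-uri": printer_uri,
            "requesting-user-name": user,
            "document-uri": document_url,          # the printer fetches this
            "document-format": "application/pdf",  # the "electronic hard copy"
        },
    }

req = build_print_uri_request(
    "ipp://printer.local/ipp/print",      # hypothetical printer endpoint
    "https://example.com/recipe.pdf",     # hypothetical recipe document
    "viewer",
)
print(req["operation"])  # Print-URI
```

This is what lets a memory-constrained device like a set-top box hand off the whole job: it never has to download or render the document itself.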

Apple iPad Pro 9.7 inch press picture courtesy of Apple

Most iPhones and iPads implement AirPrint to allow for driver-free mobile printing

Apple was the first to make a serious breakthrough towards driver-free printing and the IPP Everywhere goal when they added AirPrint in version 4.2 of the iOS platform. This was important for iOS because of the desire not to add machine-specific code for particular printers, since the iPad, iPhone and iPod Touch were mobile devices with constrained memory and storage space.

Google initially achieved something similar with their Google Cloud Print ecosystem, pitched at ChromeOS and Android. But this worked as a cloud-hosted variation of the print-management solutions pitched at enterprises, which offered a form of driverless or universal-driver printing to that user base.

But the Mopria Alliance have taken a serious step closer to driverless printing by creating a network-based printing infrastructure for the Android platform. Google followed the Cloud Print program with the Android Print Service software ecosystem, which uses “plugins” that work in the same way as drivers. Here, the Mopria Alliance, founded by Canon, HP, Samsung and Xerox, worked towards a single plugin for driver-free printing and had these companies install firmware in their machines to present themselves across a logical network to Mopria-compliant hosts, as well as process print jobs for these hosts.

What needs to happen

All printers that work with a network need to support AirPrint, IPP Everywhere and Mopria, no matter what position they hold in a manufacturer’s product lineup. This would then incentivise driver-free network printing.
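How would a driver-free host actually spot such a printer? These ecosystems advertise themselves via DNS-SD as `_ipp._tcp` services, and the host inspects the TXT record: AirPrint printers carry a “URF” key, while IPP Everywhere printers list “image/pwg-raster” among their page-description formats in the “pdl” key. This simplified classifier is a sketch of that check only; real discovery also examines service subtypes and further TXT keys:

```python
# Classify a discovered _ipp._tcp printer's driver-free support from
# its DNS-SD TXT record (simplified -- real hosts check more keys).
def driver_free_support(txt: dict) -> list:
    support = []
    if "URF" in txt:                              # AirPrint's URF key
        support.append("AirPrint")
    if "image/pwg-raster" in txt.get("pdl", ""):  # IPP Everywhere raster
        support.append("IPP Everywhere")
    return support

# Example TXT record values are illustrative, not from a real printer.
txt = {"URF": "W8,SRGB24", "pdl": "application/pdf,image/pwg-raster"}
print(driver_free_support(txt))  # ['AirPrint', 'IPP Everywhere']
```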

The IT industry also needs to investigate the use of device classes / profiles within the USB and Bluetooth standards to facilitate driver-free direct printing. This is because USB and Bluetooth are seen as connection types used for directly connecting a peripheral to a host computer device rather than connecting via a network. As well, driver-free direct printing could open up more use cases involving printing from dedicated-function devices.

Similarly, Microsoft needs to implement Mopria and/or IPP Everywhere into Windows as part of a default print driver delivered with the desktop operating system. This would then allow for truly-portable printing from laptops, tablets and 2-in-1s running the Windows operating system.

Driver-free printing could come in to its own with interactive TV especially when you are dealing with cooking shows like MasterChef

A use case that needs to be put forward for driver-free printing is its relevance to interactive TV. Here, it could be about watching a TV show, whether linearly or on-demand (including content held on Blu-Ray discs), and being able to print out resources relating to that show at a whim. Situations that come up include printing a “white paper” associated with a public-affairs show, or printing a recipe demonstrated in a cooking show. Even advertising could lead to users printing out coupons in response to advertised specials, something that would be valued in the USA where clipping coupons for special deals is the norm, or completing a booking for an advertised event with the printer turning out the tickets. Such a concept can also extend to other “lean-back” apps offered on a smart-TV platform by providing a printing option to these apps.

But this would be about achieving a user experience where you select the resource and instantiate the print job from a 10-foot “lean-back” interface using a limited remote control. It would also include advertising the fact that printable resources exist for the show you can print using the interactive-TV platform. Similarly, interactive-TV platforms like HBBTV, media-storage platforms like Blu-Ray, and smart-TV / set-top-box platforms like tvOS, Android TV or Samsung Smart Hub would need to support one or more of the driver-free printing platforms. In the case of tvOS, Apple could simply add AirPrint functionality to that set-top operating system so you could print from your Apple-TV-based setup.

The idea of driver-free printing will also be relevant to the smart home, especially where it is desirable for devices there to provide hard copy on demand. For example, kitchen appliances with access to online recipe libraries, an idea positioned by most of the big names in this field, may benefit because you could set the appliance up for a particular recipe while your printer turns out the actual recipe with its ingredients list. But this concept will need to be driven by the use of “print by reference” standards for access to online resources.

As well, a driver-free printing setup should be able to recognise label and receipt printers in order to permit transaction-driven printing using these devices. For example, address labels could be turned out as a sheet of paper with all the labels on a regular printer or as a run of labels emerging from a label printer.

How could this affect printer design and product differentiation

The use of driver-free printing won’t deter printer manufacturers from improving their products’ output speed and quality. In fact, the use of standard page-description languages will lead towards the development of high-speed coprocessors and software that can quickly render print jobs sent to them in these formats.

There will also be a competitive emphasis on the number of functions available at a multifunction printer’s control panel with this being driven by app platforms maintained by the various printer manufacturers. Like with smart TVs, it could lead towards third parties including alliances developing app platforms for manufacturers who don’t want to invest in developing and maintaining an app platform.

Let’s not forget that printer manufacturers will maintain the “horses for courses” approach when it comes to designing printer models for both home and business use. But it will lead to an emphasis on refining the various product classes without needing to think about shoehorning driver and print-monitor software for the various host devices.

Conclusion

Once we see driver-free printing, it can lead towards simplified, truly plug-and-play printer setup for all kinds of users. Similarly, it opens printers up to a large class of device types beyond mobile and desktop computing devices.
