USB-C as the power connection for a Dell XPS 13 2-in-1 Ultrabook
I have given a fair bit of space on HomeNetworking01.info to the USB-C host-peripheral connection type since it was launched, mainly because it promises a simplified, high-throughput, high-reliability connection for our computers, smartphones and similar devices.
Just lately I upgraded to a new Samsung Galaxy S8+ Android smartphone because my previous smartphone had failed. I already had some experience with the USB-C connection through my review of the Dell XPS 13 2-in-1 convertible Ultrabook, which used USB-C as its primary power connection. The Android smartphones I had before implemented a USB micro-AB connection for their power and data-transfer needs, and recent iterations of Android, which I experienced on the Galaxy Note series of phones, supported USB On-The-Go host-operation modes.
Samsung S8 Plus Android phone using USB-C connection for power and data
The main feature that I liked was the simple approach to connecting devices to my phone. Here, I didn't have to worry about which way the cable plugged into my phone, something that mattered when connecting it to a charger or power pack.
A problem I kept encountering with the USB micro-B connector on my previous phones was the need to replace USB cables because the micro-B plug would wear out in the phone's micro-AB socket through frequent connection and disconnection, typically from plugging the phone into a charger and unplugging it again for regular use. I ended up buying replacement USB-A to USB micro-B cables to remedy this problem.
Now I have a sure-fire connection experience for USB devices, similar to using the full-size USB connections commonly fitted to regular computers and peripherals.
That wear was brought on by the leaf-spring lugs on the USB micro-B plug that make sure it stays put in the common micro-AB socket fitted to smartphones. These lugs can easily lose their springiness through repeated use. The USB-C connector doesn't make use of those leaf springs to secure the plug in the socket, thanks to its single symmetrical plug design.
USB-C also works for connecting this phone to a memory card reader for reading photos from my camera
Another benefit I have experienced is the ability to use the same kind of connector whether the phone is acting as a host to a peripheral or being connected to another computer device. This avoids having to use a USB OTG cable if, for example, I want to take a photo from my camera's SD card to post on Instagram. I still needed a USB-A (female) to USB-C adaptor with the SD card reader, but that same adaptor would be just as useful for using the SD card reader or a USB memory key with any USB-C host device.
Again, I wouldn't need to worry about which way the cable plugged into a computer or smartphone equipped with this connector. This can come in handy when dealing with USB memory keys attached to keyrings or USB peripherals hanging off a USB cable.
Personally, I see the USB Type-C connection appearing as a viable connection type for laptops, tablets and smartphones especially where these devices are designed to be slim.
One way this connection could be exploited further would be for smartphone manufacturers to install two USB Type-C connectors at the bottom of their products. Similarly, a USB battery pack with USB Type-C connectivity could have three USB-C sockets and offer USB hub functionality. This would allow multiple devices to be connected to the same host device.
This article will be built out further as I deal with more connection setups that are based around the USB Type-C connector.
Driver-free printing like AirPrint allows for use of printers like this HP OfficeJet without the need to install drivers or extra software on host computers
Driver-free printing is about being able to use a printer with a host computing device without needing to install drivers or additional software on that device.
The situation with most operating systems has been that, since the rise of page-based printers, you have had to install additional driver software to get the software on your computer to work with your printer.
This means knowing the make and model of the printer and how it is connected to the host device, then downloading the software from the printer manufacturer's Website or the computer platform's app store, or loading it from media supplied with the printer by the manufacturer.
Of course, how your printer connects to your computer or mobile device, be it through a USB cable, a Bluetooth link or a network, is only about the physical link to that printer. Most of the standards associated with these connection methods don't provide support for driver-free printing.
Why is there an imperative for driver-free printing?
You could print from a mobile-platform tablet like this Lenovo to a range of printers without installing lots of extra apps. In fact, you can use Mopria to print from this Lenovo Android tablet driver-free.
A key imperative behind driver-free printing is the concept of mobile computing. It is about using highly-portable computing devices like laptops, smartphones and tablets for personal computing no matter where you are. This may include being able to use someone else's printer or a public printing facility to get that document or photo printed there and then.
Similarly it can be about paying a service provider to perform advanced printing tasks such as bulk printing and document finishing for a small business or community organisation, or a photo lab to turn out a special photo as a large high-quality print on glossy paper.
Dedicated Computing Devices
Furthermore, it can be about the idea of providing a computing device, especially a dedicated computing device with printing abilities. A key application would be interactive TV supported by a smart-TV or set-top-box platform. In this scenario, a viewer could do something like print out a recipe from a cooking show that they view on demand just by using the remote control.
Business users may find that driver-free printing benefits point-of-sale technology, especially where pure-play devices like cash registers and payment-card terminals are involved. This class of device would benefit exceptionally because the goal is to admit no more software than necessary and to run only previously-vetted software.
Similarly, it can also benefit the concept of complementary-capability printing amongst multiple printers, allowing one, for example, to make a colour copy using the scanning functionality of a monochrome laser multifunction printer along with a pure-play colour printer.
In the case of accessible computing, some blind users use PDA devices with tactile data input similar to a Perkins Braille typewriter and voice or Braille tactile output. These users want to turn out information in hard-copy form for sighted users, but their devices have the same software requirements as any dedicated computing device, so they would typically have to work according to common standards for driver-free printing.
Similar devices are being constructed to allow people to live a life independent of particular disabilities and these will benefit from driver-free hard-copy output.
Efforts that have taken place to achieve this goal
In the early days of personal computing, Epson's ESC/P codes served as a de facto standard for determining how dot-matrix impact printers format the characters they print whenever anything beyond ordinary ASCII text was required. This standard was effectively used by every manufacturer who offered dot-matrix and similar printers, whether through licensing or emulation.
A similar situation took place with Adobe with PostScript and HP with PCL as common page-description languages for laser and inkjet page printers. Again, other manufacturers took this on with licensing or emulation of the various language-interpreter software for their products.
These standards fell away as GUI-based operating systems managed printing at the operating-system level rather than at the application level. This was underscored with some printer manufacturers working with Microsoft to push forward with GDI-based host-rasterised printing leading towards cost-effective printer designs.
There have been some initial efforts taking place for driver-free printing in particular application classes, especially where dedicated-function devices were involved. This was through the persistence of ESC/P and the ESC/POS derivative printer-control protocol within the point-of-sale receipt printer space, along with the use of PictBridge as a driver-free method for printing photos from consumer digital cameras.
Similarly, some managed-business-printing and service-based-printing platforms implemented a "single-driver" approach, where one installable program lets the user print to any machine they are authorised to use regardless of make and model. But it didn't really answer the need for true driver-free operation in a printing environment.
As the home network became more common and was seen as part of the home-entertainment technology sphere, the UPnP Forum and DLNA made attempts at driver-free printing as part of their standards. It was positioned as a way to allow, for example, Smart TVs, electronic picture frames and set-top boxes to yield hard-copy output of photos for example. HP were the only vendor whose mid-tier and premium consumer printers answered these standards as I have discovered while reviewing some of their products.
The Printer Working Group started working on IPP Everywhere as a way to achieve driver-free printing via the network or direct connections for both consumer and business applications. This was even about exposing printer capabilities and features, such as stapling or PIN-driven secure job release, without the need to add special software.
One of the standard page-description languages specified for IPP Everywhere is the Adobe PDF format, which is in fact used for "download-to-print" situations. This is because it is seen as a file format that represents "electronic hard copy", and the common practice in the "download-to-print" use case is to prepare a document as a PDF file before making it available. The IPP Everywhere approach also defined a "printing by reference" use case where the printer "fetches" the PDF document off the Web server via a known URL, rather than the user downloading it to their computing device in order to turn out a hard copy of it.
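As a sketch of what "printing by reference" involves at the wire level, the following Python builds a minimal IPP Print-URI request using the IPP binary encoding; on receiving it, the printer would fetch the PDF itself from the given URL. The URIs are hypothetical examples, and a real deployment would more likely use CUPS or an IPP library than hand-rolled bytes.

```python
import struct

def ipp_attr(value_tag, name, value):
    """Encode one IPP attribute using the RFC 8010 binary encoding."""
    name_b, value_b = name.encode(), value.encode()
    return (bytes([value_tag])
            + struct.pack(">H", len(name_b)) + name_b
            + struct.pack(">H", len(value_b)) + value_b)

def build_print_uri_request(printer_uri, document_uri, request_id=1):
    """Build a minimal IPP Print-URI request so the printer fetches
    the PDF itself from document_uri ("printing by reference")."""
    msg = bytes([2, 0])                       # IPP version 2.0
    msg += struct.pack(">H", 0x0003)          # operation-id: Print-URI
    msg += struct.pack(">I", request_id)      # request-id
    msg += bytes([0x01])                      # operation-attributes group
    msg += ipp_attr(0x47, "attributes-charset", "utf-8")
    msg += ipp_attr(0x48, "attributes-natural-language", "en")
    msg += ipp_attr(0x45, "printer-uri", printer_uri)            # 0x45 = uri
    msg += ipp_attr(0x45, "document-uri", document_uri)
    msg += ipp_attr(0x49, "document-format", "application/pdf")  # mimeMediaType
    msg += bytes([0x03])                      # end-of-attributes
    return msg

# The request body would be POSTed to the printer's IPP endpoint with
# Content-Type: application/ipp (both URLs here are made-up examples).
request = build_print_uri_request("ipp://printer.local/ipp/print",
                                  "https://example.com/recipe.pdf")
```

Note that no document data travels in the request at all: the only payload is the URL, which is what makes this approach attractive for remote-control-driven devices like set-top boxes.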
Most iPhones and iPads implement AirPrint to allow for driver-free mobile printing
Apple was the first to make a serious breakthrough towards driver-free printing and the IPP Everywhere goal when they added AirPrint to version 4.2 of the iOS platform. This was important for iOS because of the desire not to add machine-specific code for particular printers, since the iPad, iPhone and iPod Touch were mobile devices with constrained memory and storage space.
Google initially achieved something similar with their Google Cloud Print ecosystem which was being pitched for ChromeOS and Android. But this worked as a cloud-driven or hosted variation of print management solutions pitched at enterprises which offered a form of driverless or universal-driver printing to that user base.
But the Mopria Alliance has made a serious step closer to driverless printing by creating a network-based printing infrastructure for the Android platform. Google followed up the Cloud Print program with the Android Print Service software ecosystem, which uses "plugins" that work in much the same way as drivers. Here, the Mopria Alliance, founded by Canon, HP, Samsung and Xerox, worked towards a single plugin for driver-free printing and had these companies install firmware in their machines so they present themselves across a logical network to Mopria-compliant hosts and process print jobs for those hosts.
What needs to happen
All printers that work with any network need to support AirPrint, IPP Everywhere and Mopria no matter what position they hold in a manufacturer's product lineup. This would then make driver-free network printing the norm.
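A host can tell which of these driver-free ecosystems a discovered printer serves from the Bonjour TXT record the printer advertises. Here's a minimal Python sketch that interprets such a record; the record dict is hand-written for illustration, whereas a real host would obtain it by browsing `_ipp._tcp` with an mDNS library such as zeroconf.

```python
def parse_printer_txt(txt):
    """Judge which driver-free ecosystems a discovered printer serves,
    based on its Bonjour TXT record (given here as a plain dict).
    Key names follow the Bonjour Printing / AirPrint conventions."""
    pdl = [fmt.strip() for fmt in txt.get("pdl", "").split(",") if fmt.strip()]
    return {
        # AirPrint printers advertise Apple URF raster capabilities
        "airprint": "URF" in txt or "image/urf" in pdl,
        # IPP Everywhere printers must accept PWG raster
        "ipp_everywhere": "image/pwg-raster" in pdl,
        "pdf_direct": "application/pdf" in pdl,
        "queue_path": txt.get("rp", "ipp/print"),  # resource path on printer
    }

# A hand-written sample record of the kind an mDNS browse would return:
sample = {"pdl": "application/pdf,image/urf,image/pwg-raster",
          "URF": "W8,SRGB24", "rp": "ipp/print"}
caps = parse_printer_txt(sample)
```

A printer that advertises all three page-description formats, as in the sample, would satisfy AirPrint, IPP Everywhere and Mopria hosts from the one print queue.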
The IT industry also needs to investigate the use of device classes / profiles within the USB and Bluetooth standards to facilitate driver-free direct printing. This is because USB and Bluetooth are seen as connection types used for directly connecting a peripheral to a host computer device rather than connecting via a network. As well, driver-free direct printing could open up more use cases involving printing from dedicated-function devices.
Similarly, Microsoft needs to implement Mopria and/or IPP Everywhere in Windows as part of a default print driver delivered with the desktop operating system. This would then allow truly-portable printing from laptops, tablets and 2-in-1s running Windows.
Driver-free printing could come in to its own with interactive TV especially when you are dealing with cooking shows like MasterChef
A use case that needs to be put forward for driver-free printing is its relevance to interactive TV. In this case, it could be about watching a TV show, whether linearly, on demand or from a Blu-Ray disc, and being able, at a whim, to print out resources relating to that show. Situations that can come up include printing a "white paper" associated with a public-affairs show or printing a recipe demonstrated in a cooking show. Even advertising could lead to users printing out coupons in response to advertised specials, something that would be valued in the USA where clipping coupons for special deals is the norm, or completing a booking for an advertised event with the printer turning out the tickets. Such a concept can also extend to other "lean-back" apps offered on a smart-TV platform by providing them with a printing option.
But this would be about achieving a user experience where you select the resource to print and instantiate the print job from a 10-foot "lean-back" user experience using a limited remote control. It would also include advertising the fact that printable resources exist for that show. Similarly, interactive-TV platforms like HbbTV, media-storage platforms like Blu-Ray, and smart-TV / set-top-box platforms like tvOS, Android TV or Samsung Smart Hub would need to support one or more of the driver-free printing platforms. In the case of tvOS, Apple could simply add AirPrint functionality to that set-top operating system so you could print from your Apple-TV-based setup.
The idea of driver-free printing will also be relevant to the smart home especially if it is desirable for devices therein to be able to provide hard copy on demand. For example, kitchen appliances that have access to online recipe libraries, an idea positioned by most of the big names in this field, may benefit from this feature because you could configure them to be set up for a particular recipe while your printer turns out the actual recipe with the ingredients list. But this concept will need to be driven by the use of “print by reference” standards for access to online resources.
As well, a driver-free printing setup should be able to recognise label and receipt printers in order to permit transaction-driven printing using these devices. For example, address labels could be turned out as a sheet of paper with all the labels on a regular printer or as a run of labels emerging from a label printer.
How could this affect printer design and product differentiation
The use of driver-free printing won't deter printer manufacturers from improving their products' output speed and quality. In fact, the use of standard page-description languages will drive the development of high-speed coprocessors and software that can quickly render print jobs sent in these formats.
There will also be a competitive emphasis on the number of functions available at a multifunction printer’s control panel with this being driven by app platforms maintained by the various printer manufacturers. Like with smart TVs, it could lead towards third parties including alliances developing app platforms for manufacturers who don’t want to invest in developing and maintaining an app platform.
Let's not forget that printer manufacturers will maintain the "horses for courses" approach when designing printer models for home and business use. But driver-free printing will allow an emphasis on refining the various product classes without having to think about shoehorning driver and print-monitor software into the various host devices.
Once we see driver-free printing, it can lead towards simplified real plug-and-play printer setup for all kinds of users. Similarly it opens up printers towards a large class of device types beyond mobile and desktop computing devices.
There are many USB hubs that allow multiple USB devices to be connected to one USB port. As well, some devices like external hard disks and keyboards are being equipped with their own USB hubs.
USB sockets on printers like this Brother colour laser won’t easily support USB hub operation even if they have a use case for that application
A USB hub also serves as an approach for creating multiple-function USB peripheral devices. Similarly, a device with multiple USB sockets for connecting peripheral devices would have that socket collection seen as a "root hub" if one controller chipset looks after it. The hub concept can also appeal to dedicated-function devices like routers, NAS devices, and home-entertainment or aftermarket automotive-infotainment setups, where the manufacturer sees the device as the hub of a system of devices.
USB hubs are divided between "bus-powered" types, powered by the host device, and "self-powered" types that have their own power supply. The latter type can be a USB device like a printer or external hard disk that has its own power supply, or a "bus-powered" USB hub with a DC input socket that becomes a "self-powered" hub when a power supply is connected.
A typical USB hub which may cause problems with concurrently running multiple devices from a dedicated-function device
The idea of implementing a USB hub with a dedicated-function device can have strong appeal across a variety of device types and combinations. For example, a router may implement a USB port for connecting a USB Mass-Storage device like an external hard disk so it can become its own file server, but also offer that port for a USB mobile-broadband modem as a failover Internet-connection option. Or a business-grade printer supporting PIN-protected "secure job release" may use a keypad compliant with the USB Human-Interface-Device specification connected to the same USB port that facilitates "walk-up" printing from a USB memory key. Even a smart TV or set-top box may use its one USB port for viewing files from one or more Mass-Storage devices and / or work with a Webcam and a software client to become a group videophone terminal.
USB sockets on consumer-electronics equipment may not properly support USB hubs
To the same extent, this could be about a setup involving a multifunction peripheral device. An example of this would be a USB keyboard with an integrated pointing device like a trackpad, trackball or thumbstick being connected to a games console or set-top box, with this setup allowing for the pointing device serving to navigate the user interface while the keyboard answers text-entry needs.
A problem that can occur with using USB hubs or hub-equipped USB peripherals with dedicated-function devices like printers, NAS devices or consumer-AV equipment is that such devices may not handle USB hubs consistently. For example, a USB keyboard that has a hub function may not be properly detected by a set-top box or games console.
This can happen due to a power limit placed on the host's USB port, which can affect many devices connected behind a bus-powered USB hub. Or, very commonly, the firmware for a dedicated-function device is written to expect a single USB device with only one function to be connected to its USB port.
What needs to happen is for a dedicated-function device to identify and enumerate each and every USB peripheral device it can properly support that is connected to its USB port whether directly or via a hub. This would be based on how much power is comfortably available across the USB bus whether provided by the host or downstream self-powered USB hubs. It is in addition to the device classes that are supported by the host device to fulfil its functions.
I previously touched on this issue in relation to USB storage devices that contain multiple logical volumes being handled by dedicated-function devices. This was to address a USB memory key or external hard disk partitioned into multiple logical volumes, a multiple-slot memory-card adaptor presenting each slot as its own drive letter, or devices that have both fixed and removable storage. There, I was raising how a printer or a stereo system with USB recording and playback could handle these USB devices properly.
Then the device may need to communicate error conditions concerning these setups. One of these would be an insufficient-power condition where there isn't enough power available to comfortably run all the devices connected to the USB port via the hub. This may arise with situations like external hard disks connected to the host device via a bus-powered hub along with other peripherals, or a self-powered hub that degrades to bus-powered operation because its "wall-wart" AC adaptor has fallen out of the power outlet or burnt out. Here, such a status may be indicated through a flashing light on a limited-interface device like a router, or a "too many devices" or "not enough power" message on devices that have displays.
If the USB bus comes up with the hub in place but none of the connected devices are supported by the host's firmware, you could see an "unsupported devices" or "charging only" error message appear on the device. Otherwise, all supported devices would be identified and enumerated no matter where they exist in the USB chain.
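The enumeration and error-reporting behaviour described above can be sketched as a simple tree walk. This is a hypothetical model, not any real device's firmware: the class list and error names are illustrative assumptions, while the power figures follow the USB 2.0 conventions of 500 mA from a high-power port and 100 mA per port on a bus-powered hub.

```python
# Device classes this hypothetical host's firmware knows how to drive:
SUPPORTED_CLASSES = {"mass-storage", "hid", "printer"}

def enumerate_tree(node, available_ma=500, supported=None, errors=None):
    """Walk a USB device tree, budget bus power, and report which
    devices the host can run plus the error conditions it should flag."""
    supported = [] if supported is None else supported
    errors = [] if errors is None else errors
    if node["type"] == "device":
        if node["draw_ma"] > available_ma:
            errors.append(("not-enough-power", node["name"]))
        elif node["dev_class"] in SUPPORTED_CLASSES:
            supported.append(node["name"])
        else:
            errors.append(("unsupported-device", node["name"]))
    else:  # a hub: self-powered hubs restore the full per-port budget
        per_port = 500 if node.get("self_powered") else min(100, available_ma)
        for child in node["children"]:
            enumerate_tree(child, per_port, supported, errors)
    return supported, errors

# A bus-powered hub carrying a keyboard and a power-hungry hard disk:
tree = {"type": "hub", "self_powered": False, "children": [
    {"type": "device", "name": "keyboard", "dev_class": "hid", "draw_ma": 98},
    {"type": "device", "name": "hdd", "dev_class": "mass-storage", "draw_ma": 400},
]}
ok, errs = enumerate_tree(tree)
```

In this sample the keyboard fits within the 100 mA a bus-powered hub can pass to each port, while the hard disk gets flagged for the "not enough power" condition, which is exactly the distinction a well-behaved dedicated-function device should surface to the user.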
In this kind of situation, there would be an emphasis on using class-driver software for the various USB Device Classes relevant to the device's functionality, although some situations, like USB modems, may call for device-specific software support.
What is essential for a USB hub or multifunction device to work properly with a dedicated-function device is that the device's firmware supports the USB Hub device class, including proper and consistent error handling. To the same extent, AC-powered devices like printers or home-entertainment equipment would need to provide power output at their USB ports equivalent to what is offered by a regular desktop computer's USB ports.
Dell used the same approach as Ford did in the 1960s with the original Mustang
During the heyday of the "good cars" in the 1960s and 1970s, the major vehicle builders worked on various ways to approach younger drivers who were after something special.
One of these was to offer a "pony car", a specifically-designed sporty-styled two-door car with a wide range of power, trim and other options yet a base model affordable to this class of buyer. Another was to add to a standard family-car lineup a two-door coupe and / or a "sports sedan" / "sports saloon" derived from that family car and built on the same chassis but sold under an exciting name, with examples being the Holden Monaro or the Plymouth Duster. This would be something young people could aspire to when they were after something impressive.
Both these approaches were made feasible through the use of commonly-produced parts rather than special parts for most of the variants or option classes. Vehicle builders could also run with more special variants such as racing-homologation specials, as well as offering "up-sell" options for customers to personalise their cars.
The various laptop-computer manufacturers are trying to create a product class that emulates what was achieved with these cars: a range of affordable high-performance computers that appeal to young buyers who want to play the latest "enthusiast-grade" games.
The Dell Inspiron 15 7000 Gaming laptop – to be superseded by the Dell G Series
One of the steps that has taken place was to offer a high-performance “gaming-grade” variant of a standard laptop model like the Dell Inspiron 15 Gaming laptop, one of which I had reviewed. This approach is similar to offering the “Sport” or “GT” variant of a common family-car model, where the vehicle is equipped with a performance-tuned powertrain like the Ford Falcon GT cars.
But Dell has come much closer to the mark set by the "pony cars" and the sporty-styled derivatives of standard family cars with the release of the G series of affordable gamer-grade laptops. Here, they released the G3, G5 and G7 models, with baseline variants equipped with traditional hard disks and small RAM amounts, but built on a construction very similar to their affordable mainstream laptops.
These models are intended to replace the Inspiron 15 Gaming series of performance laptops, and it shows that Dell wants to cater to young gamers who may not be able to afford the high-end gaming-focused models. As well, the G Series nametag is intended to replace the Inspiron nametag, whose association with Dell's mainstream consumer laptops takes the "thunder" out of owning a special product. This is similar to the situation I called out earlier with sporty vehicles derived from family-car models carrying their own nameplate.
The G3, which is considered the entry-level model, comes with a 15” or a 17” Full-HD screen and is available in a black or blue finish with the 15” model also available in white. It also has a standard USB-C connection with Thunderbolt 3 as an extra-cost “upsell” option along with Bluetooth 5 connectivity. This computer is the thinnest of the series but doesn’t have as much ventilation as the others.
The G5, which is the step-up model, is a thicker unit with rear-facing ventilation and is finished in black or red. This model, like the G7, is equipped with Thunderbolt 3 for an external graphics module along with Bluetooth 4, and offers a fingerprint scanner as an option. It comes only with a 15" screen, available in 4K or Full HD resolution.
The G7 is the top-shelf model, totally optimised for performance. This is a thicker unit with increased ventilation that implements a high-clocked CPU and RAM tuned for performance. It has similar connectivity and display technology to the G5 and is the only computer in the lineup to offer the highly-powerful Intel Core i9 CPU launched as the high-performance laptop CPU in the latest Coffee Lake lineup.
All the computers implement the latest Coffee Lake lineup of Intel high-performance Core CPUs, being the Core i5-8300H or Core i7-8750H processors depending on the specification. In the case of the high-performance G7, the Intel Core i9-8950HK CPU is offered as an option.
They all use standalone NVIDIA graphics processors to paint the picture on the display, with a choice between the GeForce GTX1060 with Max-Q, the GeForce GTX1050Ti or the GeForce GTX1050. What is interesting about the GeForce GTX1060 with Max-Q is that it is designed to run with reduced power consumption and thermal output, allowing it to run cooler and quieter in slim notebooks. The limitation is that it doesn't offer the same graphics performance as a fully-fledged GeForce GTX1060 setup of the kind deployed in larger gaming laptops.
Lower-tier packages run with mechanical hard drives while the better packages offer hybrid hard disks (with an integrated solid-state cache), solid-state drives, or dual-drive setups where the system drive (the C drive with the operating system) is a solid-state device and data is held on a 1TB hard disk known as the D drive.
I would see these machines serving as a high-performance solo computer for people like college / university students who want to play high-end games or dip their toes into advanced graphics work. As well, I wouldn't put it past Lenovo, HP and others to run with budget-priced high-performance gaming laptops to compete with Dell in courting this market segment.
Highly-portable computers of the same ilk as the Dell Inspiron 13 7000 2-in-1 will end up with highly-capable graphics infrastructure
A major change that will affect personal-computer graphics subsystems is that those subsystems that have a highly-capable graphics processor “wired-in” on the motherboard will be offering affordable graphics performance for games and multimedia.
One of the reasons is that graphics subsystems delivered as an expansion card are becoming very pricey, even exorbitantly expensive, thanks to the cryptocurrency gold rush. This is because the GPUs (graphics processors) on these cards are being used simply as dedicated computational processors for mining cryptocurrency. This situation is placing higher-performance graphics out of the reach of most home and business computer users who want to benefit from it for work or play.
But the reality is that we will be asking our computers’ graphics infrastructure to realise images that have a resolution of 4K or more with high colour depths and dynamic range on at least one screen. There will even be the reality that everyone will be dabbling in games or advanced graphics work at some point in their computing lives and even expecting a highly-portable or highly-compact computer to perform this job.
Integrated graphics processors as powerful as economy discrete graphics infrastructure
One of the directions Intel is taking is to design integrated graphics processors that use the host computer's main RAM but serve up performance equivalent to a baseline dedicated graphics processor with its own memory. This takes advantage of the fact that most recent computers are loaded with at least 4GB of system RAM, if not 8GB or 16GB. The approach supports power economy when a laptop runs on its own battery, yet these processors can still handle casual gaming or graphics tasks.
Discrete graphics processors on the same chip die as the computer’s main processor
This Intel CPU+GPU chipset will be the kind of graphics infrastructure for portable or compact enthusiast-grade or multimedia-grade computers
Another direction that Intel and AMD are taking is to integrate a discrete graphics subsystem on the same chip die (piece of silicon) as the CPU, the computer's central "brain", to provide "enthusiast-class" or "multimedia-class" graphics in a relatively compact form factor without yielding extra heat or drawing too much power. These features make the approach appeal to laptops, all-in-one computers and low-profile desktops such as the ultra-small "Next Unit of Computing" machines or consumer / small-business desktops, where silent operation and highly-compact housings are desirable.
Both CPU vendors are implementing AMD’s Radeon Vega graphics technology on the same die as some of their CPU designs.
Interest in separate-chip discrete graphics infrastructure
The Dell Inspiron 15 7000 Gaming laptop – the kind of computer that will maintain traditional soldered-on discrete graphics infrastructure
There is still an interest in discrete graphics infrastructure that uses its own silicon soldered to the motherboard. NVIDIA and AMD, especially the former, are offering this kind of infrastructure as a high-performance option for gaming laptops and compact high-performance desktop systems, along with high-performance motherboards for own-build high-performance computer projects such as "gaming rigs". The latter case would typify a situation where one builds the computer with one of these motherboards but installs a better-performing graphics card at a later date.
Sonnet eGFX Breakaway Puck integrated-chipset external graphics module – the way to go for ultraportables
This same option is also being offered in the form of external graphics modules, which are being facilitated by the Thunderbolt 3 over USB-C interface. The appeal of these modules is that a highly-portable or highly-compact computer can benefit from better graphics at a later date simply by having one of these modules plugged in. Portable-computer users can work with high-performance graphics where they use it most but keep the computer lightweight when on the road.
Graphics processor selection in the operating system
Operating systems are now providing native user interfaces for choosing which graphics processor an application uses. This avoids the issues associated with NVIDIA Optimus and similar multi-GPU-management technologies, where the feature was managed through an awkward vendor-supplied user interface. They are even making sure that a user who runs an external graphics module has the same level of control as one running a system with two graphics processors on the motherboard.
What I see now is an effort by the computer-hardware industry to make graphics infrastructure for highly-compact or highly-portable computers offer similar levels of performance to baseline or mid-tier graphics infrastructure available to traditional desktop computer setups.
The Dell XPS 15 2-in-1, billed as the smallest and thinnest 15" portable, comes in with a thickness of 16mm whether closed or folded over as a tablet. This is brought about by the single-die chip which combines an Intel 8th Generation Core CPU with an AMD Radeon RX Vega M GL graphics processor that has 4GB of display memory to "paint" with. The computer press see this setup as being equivalent to an NVIDIA GeForce GTX 1050 dedicated GPU.
It is allowing Dell to pitch the XPS 15 2-in-1 as an “enthusiast-grade” lightweight 2-in-1 laptop with the kind of performance that would please people who are into multimedia and animation work or want to play most of the newer games.
Another space-saving measure is the use of a "maglev" keyboard which uses magnets to provide the tactile feel of a keyboard with a deeper throw, while also allowing for a slim computer design.
The new Dell XPS 15 2-in-1 computer can be configured with an Intel Core i5 as the baseline option or an Intel Core i7 as the performance option. The touchscreen can be a Full HD display as the baseline option or a 4K UltraHD display with 100% Adobe RGB colour gamut as the premium option.
The RAM available ex-factory ranges from 8GB to 16GB, while the solid-state storage available ex-factory ranges from 128GB to 1TB. Personally, I would like to see the minimum storage capacity being 256GB. The only removable-storage option integrated in this computer is a microSD card slot, which may require you to use a microSD card with an SD adaptor in your camera, or to carry a USB-C SD card reader for your digital camera's SD memory card.
The baseline price for this model intended to be available in the USA in April is expected to be US$1299. Personally I would see the Intel CPU/GPU chipset preparing the path for a slow return of the “multimedia laptop” but in a lightweight manner and with a larger battery.
Intel Corporation is introducing the 8th Gen Intel Core processor with Radeon RX Vega M Graphics in January 2018. It is packed with features and performance crafted for gamers, content creators and fans of virtual and mixed reality. (Credit: Walden Kirsch/Intel Corporation)
Intel have used the Consumer Electronics Show 2018 to premiere a system-on-chip that is to affect how portable and small-form-factor computers will perform.
This chip, part of the 8th generation of Intel CPUs, contains an 8th Generation Core i5 or i7 CPU along with an AMD Radeon RX Vega M discrete graphics processor and an Intel HD Graphics 630 integrated graphics processor.
It is positioned in the Intel 8th Generation processor lineup as follows:
G-Series processors that are also equipped with the above-mentioned Radeon RX Vega M graphics processors. These are pitched as a performance option which would appeal to most gamers, virtual-reality / augmented-reality enthusiasts and content creators who want a machine with that bit of “pep” when it comes to graphics.
H-Series processors which are pitched towards those who want the highest performance and would rely on a dedicated graphics processor. Here, they would apply to the gaming rigs and workstations where the goal is for full-on performance.
What is special about these Intel processors
These Intel processors place the Core CPU and the AMD GPU on the same die along with a stack of dedicated graphics RAM and they are linked using the EMIB (Embedded Multi-Die Interconnect Bridge). This arrangement provides a short link between each component to provide for quick data transfer. There is also a power-optimised design to allow for efficient power use by all the components on the chip.
There are two variants of the graphics subsystem available for this chipset, known as the GL and the GH. The GL (Graphics Low) variant is optimised for a power draw of less than 65 watts and is pitched towards "thin-and-light" laptops and the like. The GH (Graphics High) variant is a higher-performance variant that draws less than 100 watts and only comes with the Core i7 CPU. It is pitched towards small-form-factor desktops, all-in-ones and similar computers that normally work from a constant power supply.
All that horsepower on those dies allows the computer to paint an image across nine display devices at once. The fact that there is an integrated graphics processor on board allows these "system-on-chip" setups to engage in "performance / economy" switching to maximise power efficiency.
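To illustrate the idea behind that "performance / economy" switching, here is a minimal, purely hypothetical policy sketch: the integrated processor is chosen to save power unless the application is flagged as demanding and mains power is available. The function and application names are illustrative assumptions, not a real driver or operating-system API.

```python
# Hypothetical sketch of a "performance / economy" GPU-switching policy.
# Real graphics drivers expose this differently; names here are invented.
def choose_gpu(app_name, on_battery, high_perf_apps):
    """Pick which graphics processor should render a given application."""
    if app_name in high_perf_apps and not on_battery:
        return "discrete"    # e.g. the Radeon RX Vega M class processor
    return "integrated"      # e.g. the power-economical Intel HD class

# Illustrative application list (assumed, not from any vendor).
high_perf = {"game.exe", "video-editor.exe"}

print(choose_gpu("game.exe", on_battery=False, high_perf_apps=high_perf))
print(choose_gpu("game.exe", on_battery=True, high_perf_apps=high_perf))
```

The same policy shape extends naturally to external graphics modules: the module simply becomes another "discrete" choice when it is plugged in.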
Where are they being premiered?
The first two variants are the Core i7-8809G CPU with Radeon RX Vega M GH for performance and the Core i7-8705G CPU with Radeon RX Vega M GL as the value option.
These are being released to go with the Hades Canyon series of "Next Unit Of Computing" small-form-factor computers. Both of these computers are available as a kit which can support 32GB (2 x 16GB) of DDR4 RAM and two M.2-compliant solid-state drives. They have plenty of USB connections including two Thunderbolt 3 sockets, and can connect to your home network via one of two Gigabit Ethernet sockets or 802.11ac Wi-Fi.
What kind of impact do I see these Intel chips having on computer design?
One class of computer that will definitely benefit will be the portable computers that most of us will consider purchasing. The computing press see a benefit when it comes to "enthusiast-class" laptops, where they will gain a slimmer chassis along with the ability to run in a quiet and cool manner yet deliver the performance they are known for. It will also lead to longer battery runtimes, of around nine hours, even while engaging in high-performance work.
But I see computer manufacturers deploying these CPU/GPU chipsets as the standard expectation for the mainstream 13”-15” home or business laptops that are their “bread and butter” products. Typically these machines have a larger chassis than the ultraportables and are valued by most users for factors like durability, connectivity and ability to choose different configuration options. Here, the manufacturers can design in larger battery packs or extra peripherals like multiple storage devices or optical drives or even improve how these computers sound by using larger speakers.
Let’s not forget that the computer manufacturers could also offer in their ultraportable lineup a run of computer products that are thin and light yet powerful.
As far as sessile computers are concerned, I would see the ultra-small "Next Unit of Computing" units benefit, along with all-in-ones that have the computing electronics as part of the screen. Other traditional desktop computers that could benefit include those the same size and shape as typical consumer-electronics devices.
I would see Intel’s 8th-generation “Coffee Lake” G-series CPU/GPU hybrid chip being something that offers greater potential for how the personal computer is designed without losing the desire for more computing power.
Just lately, Intel released their 8th-generation Kaby Lake R family of "Core i" processors, which are targeted at portable computers. These powerful CPUs, optimised for portable use, were issued with the intent of competing against AMD's upcoming Ryzen processors pitched at a similar usage scenario. Various press articles even drew attention to being able to play more demanding PC games on these lightweight computers rather than limiting their scope of activity.
Now AMD have released their own silicon for the laptop market, which integrates the Radeon Vega graphics-processing circuitry on the same chip. Here, they are fielding the Ryzen 7 2700U and Ryzen 5 2500U 15-watt processors, instigating a race against Intel's Kaby Lake R horsepower and its QHD-capable integrated graphics.
What I see of this is that Intel and AMD will make sure that this generation of ultraportable computers will be seen to be more powerful than the prior generations. Think of using an Intel Kaby Lake R Core i7 or AMD Ryzen 7 powered 2-in-1 for most photo-editing tasks or as a “virtual turntable” in the DJ booth, activities that wouldn’t be associated with this class of computer.
At the moment, Intel hasn’t licensed the Thunderbolt 3 connectivity standard across the board, including to AMD, which presents a limitation when it comes to allowing users to upgrade the graphics capabilities of their AMD Ryzen-equipped laptops using an external graphics module.
One way Intel could approach this is to divest the Thunderbolt standards and intellectual property to an independent working group like the USB Implementers Forum, so that manufacturers who implement Intel, AMD, ARM RISC-based silicon from vendors like Qualcomm, or other silicon can use Thunderbolt 3 as a high-throughput external connectivity option. This could establish an even playing field for all of the silicon vendors providing processing power for the various computing devices out there.
At least Intel and AMD are taking steps in the right direction towards the idea of mixing portability and power for computing setups based on regular-computer platforms. It may also make this kind of performance become affordable for most people.
Intel are releasing their eighth-generation lineup of CPUs, which has been considered a major step up in performance from the "engines" that drive your computer. This affects the Core i family of processors used in most desktop and laptop computers issued over the last few years.
There are three classes in the 8th Generation lineup: Coffee Lake, which is pitched at desktops; Cannon Lake, which is pitched at mobile applications; and Kaby Lake Refresh, which is pitched at most of the ultraportables including the 2-in-1s.
This class of CPU has impressed me more with the arrival of ultraportable computers, especially 2-in-1 detachables and convertibles, that could do more than what is normally associated with this class of computer.
It is brought about through an increase in the number of “cores” or processor elements installed in the physical chip die, similar to the number of cylinders in your car’s engine which effectively multiply the power available under that hood. In this case, the improvements that Intel were providing were very similar to what happened when the “V” configuration was implemented for engine-cylinder layouts that allowed more power from a relatively-compact engine, allowing the vehicle builder to offer increasingly-powerful engines for the same vehicle design.
In this case, there was the ability to use low-power processors like 15-watt designs with the increased “cores” but not sacrifice battery runtime or yield too much waste heat. This opened up the capability for an ultraportable or tablet to be able to do more without becoming underpowered while running for a long time on battery power.
For example, HP just released the ZBook x2 detachable tablet, which has the kind of power that suits advanced graphics and allied programs. Some could see this as a typical detachable tablet that isn't so powerful, but this handheld workstation can run these programs thanks to its Intel 8th Generation Core i7 Kaby Lake R processor and NVIDIA Quadro discrete graphics. There is even the option to specify it with 32GB of RAM.
Then there’s Dell who have refreshed their XPS and Inspiron ultraportables with Intel 8th-generation horsepower with the XPS 13 benefiting from that extra performance, making the whole XPS 13 clamshell Ultrabook lineup show its relevance more.
It will be up to the software vendors to make games and other software that take advantage of these high-performance 2-in-1 computers by exploiting the touchscreens and the higher power offered by these machines. How about a Civilization, SimCity, one of the mobile “guilty-secret” games, or more being available through the Microsoft Store for one to install on that 2-in-1?
Western Digital has broken the record for data stored on a 3.5” hard disk by offering the HGST by WD UltraStar HS14 hard disk.
This 3.5” hard disk is capable of storing 14TB of data, which is seen as a significant increase in data density for disk-based mechanical storage. It implements HelioSeal construction technology, which yields a hermetically-sealed enclosure filled with helium. This permits thinner disks along with reduced cost, cooling requirements and power consumption.
At the moment, this hard disk is being pitched at heavy-duty enterprise, cloud and data-center computing applications rather than regular desktop or small-NAS applications. But I also see these ultra-high-capacity hard disks earning their keep in localised data-processing applications where non-volatile secondary storage is an important part of the equation.
Such situations would include content-distribution networks such as the Netflix application, or edge / fog computing applications where data has to be processed and held locally. These applications depend on relatively small devices that can be installed close to where the data is created or consumed, such as telephone exchanges, street cabinets, or telecommunications rooms.
I would expect this level of data density to filter through to other hard disks and to devices based on them. For example, applying it to the 2.5” hard-disk form factor could see those hard disks approach 8TB or more, yielding highly capacious compact storage devices. Or the same storage capacity could be made available in hard drives that suit regular desktop computers and NAS units.
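A rough back-of-envelope check on that 2.5” estimate: scale the 14TB figure by the platter-area ratio between the two form factors. The platter diameters below are assumed typical values, not WD specifications, and real 2.5” drives also fit fewer platters, so this is only a ballpark sketch.

```python
# Back-of-envelope capacity scaling (assumed figures, not drive specs).
capacity_3_5_tb = 14.0   # the Ultrastar HS14's 3.5-inch capacity

# Approximate platter diameters in inches for each form factor.
diameter_3_5 = 3.74      # roughly a 95 mm platter in a 3.5-inch drive
diameter_2_5 = 2.56      # roughly a 65 mm platter in a 2.5-inch drive

# Recording area scales with the square of the diameter, all else equal.
area_ratio = (diameter_2_5 / diameter_3_5) ** 2
capacity_2_5_tb = capacity_3_5_tb * area_ratio
print(f"Scaled 2.5-inch capacity: ~{capacity_2_5_tb:.1f} TB")
```

The result lands in the mid-single-digit terabytes per equivalent platter stack, which is why the same areal density that yields 14TB in a 3.5” drive translates to several terabytes, heading towards that 8TB mark, in the smaller form factor.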