Now with three major desktop computing platforms and two mobile computing platforms on the market, there is a demand to create software that can run on all of them. It also means that the software has to operate in a manner that suits the different user experiences that different computing devices offer.
.. and tablets
The differing factors for the user experiences include screen size and general aspect ratio, as in “portrait” or “landscape”; whether there is a keyboard, mouse, stylus or touchscreen as a control interface; or, nowadays, whether there are two or more screens. Then you have to think about whether to target a mobile use case or a regular-computer use case and optimise your software accordingly. You may even end up targeting “small mobile” (smartphone), “large mobile” (iPad or similar tablet), “desktop” (desktop or laptop computer including 2-in-1 convertibles) and “lean-back” (smart TV / set-top / games console) use cases at once.
.. and laptops with the same codebase
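The kind of use-case targeting described above can be sketched as a simple classifier that picks a layout class from the device's screen size and input methods. The thresholds and category names here are illustrative assumptions only, not drawn from any particular framework:

```python
# Hypothetical sketch: classify a device into one of the use cases named
# above. Thresholds are illustrative, not from any real framework.

def layout_class(diagonal_inches, has_touch, has_remote):
    """Pick a layout class from screen size and input method."""
    if has_remote:
        return "lean-back"          # smart TV / set-top / games console
    if diagonal_inches < 7 and has_touch:
        return "small mobile"       # smartphone
    if diagonal_inches < 13 and has_touch:
        return "large mobile"       # iPad or similar tablet
    return "desktop"                # desktop, laptop or 2-in-1 convertible

print(layout_class(6.1, True, False))   # small mobile
print(layout_class(55, False, True))    # lean-back
```

A real cross-platform framework would make this decision continuously as windows resize or devices fold, but the principle of branching the user experience on these factors is the same.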
Google and Microsoft have established a partnership to position Google’s Flutter 2 software development platform as a way to create desktop and mobile software solutions. It builds on Microsoft’s foundation stones like their BASIC interpreters, which got most of us into personal computing and software development.
Here it is about creating common codebases for native apps that target iOS, Android, Windows 10, MacOS and Linux, alongside Web apps that work with Chrome, Firefox, Safari and Edge. But a question that could be raised is: if an app is targeted at Google Chrome, would it work fully with other Chromium-based Web browsers like the new Microsoft Edge, Opera or Chromium for Linux?
The creation of Web apps may be about being independent of platform app stores which have a strong upper hand on what appears there. Or it may be about reaching devices and platforms that don’t have any native software development options available to average computer programmers.
Some of the targeted approaches for this new platform include “progressive Web apps”, which run on many platforms using Web technology and omit the Web-browser “chrome” while they run.
The new Flutter 2 platform will also be about creating apps that take advantage of multiple-screen and foldable setups. This is in addition to creating fluid user interfaces that can run on single-screen desktop, tablet and smartphone setups. Creating a user interface for multiple-screen and foldable setups is seen as catering to a rare use case because there are few foldable devices like the Microsoft Surface Duo on the market, let alone in circulation. Another question that can crop up is multiple-screen desktop-computing setups and how to take advantage of them when creating software.
What I see of this is the rise of software-development solutions that are about creating software for as many different computing platforms as possible.
But PocketBook has extended this approach to an eBook reader with a larger 7.8” display that achieves the same dots-per-inch resolution as the 6” model. The frontlight is even designed to work properly with E-Ink Kaleido and yield the best visual performance even when it is turned down to the lowest level.
Most of the features of this PocketBook InkPad Color are the same as the smaller PocketBook Color eBook reader’s, with things like text-to-speech, Bluetooth connectivity, and support for audio files based on the MP3, Ogg Vorbis and AAC codecs. But it also has Wi-Fi, which would come into its own for downloading eBooks and other “electronic hard copy” material from PocketBook’s own electronic bookstore, Dropbox and ReadRate. It also has a built-in RSS-based Webfeed reader for those of us who follow blogs and other online services using this standard technology.
The large colour display may come into its own with graphic novels or other illustrated material. I would see this more so in France and Belgium where the “BD” (bande dessinée) comic albums are an artform unto themselves. Even business and education would value the large colour screen for illustrated materials delivered in electronic hard copy.
The PocketBook InkPad Color will weigh in at 225g even though it has the large screen. It will cost €299 in Europe or US$330 in the USA.
It will be interesting to know how the E-Ink Kaleido technology will be taken further. In the near term, it could be about moving towards larger colour e-ink displays. But it could also lead towards work on photo-quality colour e-ink displays, making for electronic photo frames that use this technology or even towards colour digital signage.
What needs to happen is for more eBook readers to license and implement colour e-ink technology. Here, a colour display can be seen by an e-book reader manufacturer as a product differentiator just as size or network / Internet connectivity is used for that purpose. It can encourage authors and publishers to use colour as a drawcard for their eBook versions of their works.
A German company has fielded a videoconferencing package which is Europe’s answer to what Zoom, Skype and Microsoft Teams are about. This is part of a variety of efforts by European governments and businesses to create credible mainstream IT service alternatives to what the USA and China are offering while respecting European values. One example is Germany’s effort to create a public data-processing cloud within that country’s borders as part of leading an effort towards a Europe-wide public cloud.
This is in the form of Alfaview which provides a Zoom-style experience
This company, Alfatraining Bildungszentrum GmbH, based in Karlsruhe, Baden-Württemberg, Germany, has released the Alfaview video-conferencing platform. This platform places privacy and European sovereignty first in the way it is engineered.
The Alfaview platform’s servers are based in Germany and the company heavily underscores the spirit of European values, especially the GDPR. Videoconferencing data is encrypted using TLS and AES-256 during conversations. But they can allow the use of non-German services as long as they are in the EU, again underscoring European values. There is also the ability for people to join the platform from all over the world, thus avoiding a problem with European technologies and services of limited usability from areas beyond Europe.
As well, it answers the weaknesses associated with the videoconferencing establishment when it comes to offering this kind of service to consumers and small businesses. This encompasses Zoom not being all that secure, and Microsoft not maintaining Skype while pitching the Teams videoconferencing package squarely at big business. As well, Facebook, which has jumped on the bandwagon with Messenger Rooms, is not all that respected when it comes to security and privacy.
Alfaview runs natively on Windows, MacOS, Linux (Debian package) and iOS, and will soon be ported to Android. But they could simply reuse the Linux package as a codebase for reaching out to the ChromeOS and Android platforms. As well, I am not sure if the iOS version is optimised for the iPad, something I consider important for mobile platforms that have tablet devices because these devices have a strong appeal for multi-party videoconferences.
There is a free package for individuals and families which provides one room with 50 participants. As well, Alfaview has a Free Plus package pitched towards the education and non-profit sectors. This one has most of the features of the corporate package, like 40 rooms per account with 50 participants each. There is also the ability to run 10 concurrent breakout groups per room.
This is in conjunction with various paid plans for ordinary businesses to buy in to for their videoconferencing needs. Alfaview even offers the software in a “white-label” form for companies to brand as their own.
But what I see of the Alfaview approach is that the Europeans are offering a Zoom-style service that respects their values while competing with what the Silicon Valley establishment is offering.
Lenovo Flex 5G / Yoga 5G convertible notebook which runs Windows on Qualcomm ARM silicon – the first laptop computer to have 5G mobile broadband on board
Increasingly, regular computers are moving towards the idea of having processor power based around either the classic Intel (x86/x64) or ARM RISC microarchitectures. This is being driven by portable computers heading towards the latter microarchitecture as a power-efficiency measure, a concept driven by its success with smartphones and tablets.
ARM is undertaking a different approach to designing silicon, especially RISC-based silicon, where different entities are involved in design and manufacturing. Previously, Motorola took the same approach as Intel and other silicon vendors to designing and manufacturing their desktop-computing CPUs and graphics infrastructure. Now ARM designs the microarchitecture themselves while other entities like Samsung and Qualcomm design and fabricate the exact silicon for their devices.
Apple to move the Macintosh platform to their own ARM RISC silicon
As well, the Linux community have established Linux-based operating systems on the ARM microarchitecture. This has led to Google running Android on ARM-based mobile and set-top devices and offering Chromebooks that use ARM silicon; along with Apple implementing it in their operating systems. Not to mention the many NAS devices and other home-network hardware that implement ARM silicon.
Initially the RISC-based computing approach was about more sophisticated use cases like multimedia or “workstation-class” computing compared to basic word-processing and allied computing tasks. Think of the early Apple Macintosh computers, the Commodore Amiga with its many “demos” and games, or the RISC/UNIX workstations like the Sun SPARCstation that existed in the late 80s and early 90s. Now it is about power and thermal efficiency for a wide range of computing tasks, especially where portable or low-profile devices are concerned.
Already mobile and set-top devices use ARM silicon
I see an expectation for computer operating systems and application software to be written and compiled for both the classic Intel x86 and ARM RISC microarchitectures. This will require software development tools to support compiling and debugging on both platforms and, perhaps, microarchitecture-agnostic application-programming approaches. It is also driven by the use of the ARM RISC microarchitecture in mobile and set-top/connected-TV computing environments, with a desire to allow software developers to have software that is useable across all computing environments.
.. as do a significant number of NAS units like this WD MyCloud EX4100 NAS
Some software developers, usually small-time or bespoke-solution developers, will end up using “managed” software development environments like Microsoft’s .NET Framework or Java. These allow the programmer to turn out an executable file that depends on pre-installed run-time elements in order to run. These run-time elements are installed in a manner specific to the host computer’s microarchitecture and take advantage of the host computer’s capabilities. Such environments may allow the software developer to “write once, run anywhere” without knowing whether the computer the software is to run on uses an x86 or ARM microarchitecture.
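Python itself works this way, which makes it handy for a quick sketch of the idea: the script below runs unchanged on either microarchitecture, with the architecture-specific code carried by the interpreter that was installed on the host. The mapping of raw machine names to families is my own illustrative assumption:

```python
import platform

def microarchitecture(machine: str) -> str:
    """Map a raw platform.machine() value to a microarchitecture family.

    The value lists here are illustrative; real deployments may report
    other strings depending on the operating system.
    """
    machine = machine.lower()
    if machine in ("x86_64", "amd64", "i386", "i686"):
        return "x86"
    if machine in ("arm64", "aarch64") or machine.startswith("arm"):
        return "ARM"
    return machine  # some other microarchitecture

# The same script runs unchanged on either family; only the installed
# runtime differs between hosts.
print("Running on", microarchitecture(platform.machine()))
```

The program never needed to be compiled for a particular instruction set, which is exactly the appeal of the managed-environment approach described above.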
There may also be an approach towards “one-machine two instruction-sets” software development environments to facilitate this kind of development where the goal is to simply turn out a fully-compiled executable file for both instruction sets.
It could be in an accepted form like run-time emulation or machine-code translation, as is used to allow MacOS or Windows to run extant software written for different microarchitectures. Or one may have to look at what went on with some early computer platforms like the Apple II, where a user-installable co-processor card with the required CPU would allow the computer to run software for another microarchitecture and platform.
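At its simplest, machine-code translation can be pictured as mapping each source instruction to one or more instructions of the target set before execution. Real translators like Rosetta are vastly more sophisticated, and the mnemonics below are made up purely for illustration:

```python
# Toy illustration of machine-code translation between instruction sets.
# The mnemonics are invented; a complex (CISC-style) operation expands
# into several simpler (RISC-style) operations.

TRANSLATION_TABLE = {
    "LOADX": ["ldr"],                   # one-to-one translation
    "ADDMEM": ["ldr", "add", "str"],    # one CISC op becomes three RISC ops
}

def translate(program):
    """Expand each source instruction into its target-set equivalent."""
    target = []
    for instruction in program:
        target.extend(TRANSLATION_TABLE[instruction])
    return target

print(translate(["LOADX", "ADDMEM"]))   # ['ldr', 'ldr', 'add', 'str']
```

The translated program is then what the target CPU actually executes, which is why translated software tends to run slower than natively compiled software.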
Computer Hardware Vendors
For computer hardware vendors, there will be an expectation towards positioning ARM-based silicon towards high-performance power-efficient computing. This may be about highly-capable laptops that can do a wide range of computing tasks without running out of battery power too soon. Or “all-in-one” and low-profile desktop computers will gain increased legitimacy when it comes to high-performance computing while maintaining the svelte looks.
Personally, if ARM-based computing were to gain significant traction, it may have to be about Microsoft encouraging silicon vendors other than Qualcomm to offer ARM-based CPUs and graphics processors fit for “regular” computers. As well, Microsoft and the Linux community may have to look towards legitimising “performance-class” computing tasks like “core” gaming and workstation-class computing on that microarchitecture.
There may be the idea of using the 64-bit x86 microarchitecture as a solution for focused high-performance work. This may be due to the large amount of high-performance software code written to run on the classic Intel and AMD silicon. It will most likely persist until a significant amount of high-performance software is written to run natively on ARM silicon.
Thanks to Apple and Microsoft heading towards ARM RISC microarchitecture, the computer hardware and software community will have to look at working with two different microarchitectures especially when it comes to regular computers.
This week, Apple used its WWDC software developers’ conference to announce that the Macintosh regular-computer platform will move away from Intel’s silicon to Apple’s own ARM-based silicon. This is to bring that computing platform into line with their iOS/iPadOS mobile computing platform, their tvOS Apple TV set-top platform and their Watch platform, all of which use Apple’s own silicon.
Here, this silicon will use the ARM RISC instruction-set microarchitecture rather than the x86/x64 architecture used with Intel silicon. But Apple is no stranger to moving the Macintosh computing platform between microarchitectures.
Initially this platform used Motorola 680x0 silicon, a CISC design, before moving to the PowerPC, which used a RISC instruction-set microarchitecture. This platform initially had more chops compared to Intel’s x86 platform, especially when it came to graphics and multimedia. Then, when Apple realised that Intel offered cost-effective microprocessors using the x86-64 microarchitecture with the same kind of multimedia prowess as the PowerPC processors, they moved the Macintosh platform to Intel silicon.
But Apple had to take initiatives to bring MacOS and Mac application software over to this platform. This required them to supply software development tools to the software-development community that allowed programs to be compiled for both PowerPC and Intel instruction sets. They also furnished an instruction-set translation layer called Rosetta to users of Intel-based Macs so they could run extant software written for PowerPC silicon.
For a few years, this caused some awkwardness for Mac users, especially early adopters, due either to the limited availability of software natively compiled for Intel silicon, or to their existing PowerPC-native software running too slowly on their Intel-based computers thanks to the Rosetta instruction-set-translation software working between the program and the computer’s silicon.
Apple will be repeating this process in a very similar way to the initial Intel transition by providing software-development tools that build for both Intel x86-64 silicon and their own ARM-based silicon. As well, they will issue Rosetta 2, which does the same job as the original Rosetta but translates x86-64 CISC machine instructions to the ARM RISC instruction set their own silicon uses. Rosetta 2 will be part of the next major version of MacOS, which will be known as Big Sur.
The question that will be raised amongst developers and users of high-resource-load software like games or engineering software is what impact this conversion will have on that level of software. Typically most games are issued for the main games consoles and Windows-driven Intel-architecture PCs ahead of Macs or tvOS-based Apple TV set-top devices, with ports for these platforms coming later on in the software’s evolution.
There is an expectation that the Rosetta 2 instruction-set-translation software could handle this kind of software properly, to a point where it performs satisfactorily on a computer using integrated graphics infrastructure working at Full HD resolution. Then there will be the issue of making sure it works with a Mac that uses discrete graphics infrastructure and higher display resolutions, thus giving the MacOS platform some “gaming chops”.
I see the rise of ARM RISC silicon in the traditional regular-computing world, existing alongside classic Intel-based silicon as is happening with Apple and Microsoft, as a challenge for computer software development. This is despite some work having taken place within the UNIX / Linux space to facilitate the development of software for multiple computer types, work which led that space to bring forth the open-source and shared-source software movements. It is more so with Microsoft, where there is an expectation to have Intel-based silicon and ARM-based silicon exist alongside each other for the life of a common desktop computing platform, with each silicon type serving particular use cases.
Laptops with Intel graphics infrastructure like this Dell XPS 13 will benefit from having any manufacturer-specific customisations to the graphics driver software delivered as a separate item from that driver code
Intel is now taking a different approach to packaging the necessary Windows driver software for its graphics infrastructure. This will affect any of us who have Intel graphics infrastructure in our computers, including those of us who have Intel integrated-graphics chipsets working alongside third-party discrete graphics infrastructure in our laptops as an energy-saving measure.
Previously, computer or motherboard manufacturers who wanted to apply any customisations to their Intel integrated-graphics driver software for their products had to package the customisations with the driver software as a single entity. Typically it was to allow the computer manufacturer to optimise the software for their systems or introduce extra display-focused features peculiar to their product range.
.. even if the Intel graphics architecture is used as a “lean-burn” option for high-performance machines like this Dell Inspiron 15 7000 Gaming laptop when they are run on battery power
This caused problems for those of us who wanted to keep the driver software up-to-date to get the best out of the integrated graphics infrastructure in our Intel-based laptops.
If you wanted to benefit from the manufacturer-supplied software customisations, you had to go to the manufacturer’s software-support Website to download the latest drivers which would have your machine’s specific customisations.
Here, the latest version of the customised drivers may be out-of-step with the latest graphics-driver updates offered by Intel at its Website and if you use Intel’s driver packages, you may not benefit from the customisations your machine’s manufacturer offered.
The different approach Intel is using is to have the graphics driver and the customisations specific to your computer delivered as separate software packages.
Here, Intel will be responsible for maintaining their graphics-driver software as a separate generic package which will have API “hooks” for any manufacturer-specific customisation or optimisation code to use. Users can pick this up from the Intel driver-update download site, the manufacturer’s software update site or Windows Update. Then the computer manufacturer will be responsible for maintaining the software peculiar to their customisations and offering the updates for that software via their support / downloads Website or Microsoft’s Windows Update.
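This split-package arrangement resembles a classic plugin-hook pattern: the generic driver exposes hook points, and a separately updated OEM package registers its customisations against them. The sketch below is a hypothetical illustration of that pattern only; the class and setting names are invented, not Intel's actual driver interfaces:

```python
# Hypothetical sketch of the split-package idea: a generic driver with
# hook points, plus a separately shipped OEM customisation. All names
# here are illustrative, not Intel's real driver API.

class GenericGraphicsDriver:
    def __init__(self):
        self._hooks = {"post_init": []}

    def register_hook(self, event, callback):
        """OEM customisation packages attach their code here."""
        self._hooks[event].append(callback)

    def initialise(self):
        settings = {"colour_profile": "standard", "panel_overdrive": False}
        for callback in self._hooks["post_init"]:
            settings = callback(settings)   # let each OEM tweak run
        return settings

# What a manufacturer-specific package might ship, updated on its own cycle:
def oem_colour_tuning(settings):
    settings["colour_profile"] = "oem-calibrated"
    return settings

driver = GenericGraphicsDriver()
driver.register_hook("post_init", oem_colour_tuning)
print(driver.initialise())   # settings now carry the OEM customisation
```

Because the customisation only touches the hook interface, Intel can update the generic driver and the manufacturer can update its tuning package independently, which is the point of the new arrangement.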
It may be seen as a two-step process if you are using Intel’s and your computer manufacturer’s Websites or software-update apps for this purpose. On the other hand, if you rely on Windows Update as your driver-update path, this process would be simplified.
With the increasing number of online services including cloud and “as-a-service” computing arrangements, there has to be a way for users to gain access to these services.
Previously, the common way was to use a Web-based user interface where the user has to run a Web browser to gain access to the online service. The user sees the Web browser’s interface elements, also known as “chrome”, as part of the user experience. This used to be very limiting when it came to functionality, but various revisions of the core Web standards have allowed for a similar kind of functionality to regular apps.
Dropbox native client view for Windows 8 Desktop- taking advantage of what Windows Explorer offers
A variant on this theme is a “Web app” which provides a user interface without the Web browser’s interface elements. But the Web browser works as an interpreter between the online service and the user interface although the user doesn’t see it as that. It is appealing as an approach to writing online service clients due to the idea of “write once run anywhere”.
Another common approach is to write an app that is native to a particular computing platform and operating system. These apps, which I describe as “native clients”, are totally optimised in performance and functionality for that computing platform. This is because there isn’t the overhead associated with a Web browser needing to interpret code associated with a Web page. As well, the software developer can take advantage of what the computing platform and operating system offer even before the Web browser developers build support for a function into their products.
There are some good examples of online-service native clients having an advantage over Web apps or Web pages. One of these is messaging and communications software. Here, a user may want to use an instant-messaging program to communicate with their friends or colleagues while using graphics software or games which place demands on the computer. Here, a native instant-messaging client can run alongside the high-demand program without the overhead associated with a Web browser.
The same situation can apply to online games where players can see a perceived improvement in their performance. As well, it is easier for the software developer to write them to take advantage of higher-performance processing silicon. It includes designing an online game for a portable computing platform that optimises itself for either external power or battery power.
This brings me to native-client apps that are designed for a particular computing platform from the outset. One key application is to provide a user interface that is optimised for “lean-back” operation, something that is demanded of anything that is about TV and video content. The goal often is to support a large screen viewed at a distance along with the user navigating the interface using a remote control that has a D-pad and, perhaps, a numeric keypad. This remote control represents the primary kind of user interface that most smart TVs and set-top boxes offer.
Another example is software written to work with online file-storage services. Here, a native client for these services can expose your files at the online file-storage service as if they were another file collection similar to your computer’s hard disk or removable medium.
Let’s not forget that native-client apps can be designed to make best use of application-specific peripherals due to them directly working with the operating system. This can work well with setups that demand the use of application-specific peripherals, including the so-called “app-cessory” setups for various devices.
When an online platform is being developed, the client software developers shouldn’t forget the idea of creating native client software that works tightly with the host computer platform.
An iPhone or iPad running iOS 11 has native support for the HEIF image file format
A new image file format has started to surface over the last few years but it is based on a different approach to storing image files.
This file format, known as HEIF or High Efficiency Image Format, is designed and managed by the MPEG group, who have defined a lot of commonly-used video formats. It is being seen by some as the “still-image version of HEVC”, with HEVC being the video codec used for 4K UHDTV video files. It uses HEVC as the preferred codec for image files and will provide support for newer and better image codecs, including codecs offering lossless image compression in a similar vein to what FLAC offers for audio files.
Unlike JPEG and the other image files that have existed before it, HEIF is seen as a “container file” for multiple image and other objects rather than just one image file and some associated metadata. As well, the HEIF file format and the HEVC codec are designed to take advantage of today’s powerful computing hardware integrated in our devices.
Simple concept view of the HEIF image file format

The primary rule here for HEIF is that it is a container file format specifically for collections of still images. It is not really about replacing one of the video container file formats like MP4 or MKV, which are used primarily for video footage.
What will this mean?
One HEIF file could contain a collection of image files such as “mapping images” to improve image playback in certain situations. It can also contain the images taken during a “burst” shot where the camera takes a rapid sequence of images. This can also apply with image bracketing where you take a sequence of shots at different exposure, focus or other camera settings to identify an ideal image setup or create an advanced composite photograph.
This could lead to HEIF displacing GIF as the carrier for the animated images often provided on the Web. Here, you could use software to identify a sequence of images to be played like a video, including having them repeat in a loop thanks to in-file metadata. This emulates what the Apple Live Photos function was about with iOS and can allow users to create high-quality animations, cinemagraphs (still photos with a small subtle looping animation) or slide-shows in the one HEIF file.
HEIF uses the same codec associated with 4K UHDTV for digital photos
There is also the ability to store non-image objects like text, audio or video in an HEIF file along with the images which can lead to a lot of things. For example, you could set a smartphone to take a still and short video clip at the same time like with Apple Live Photos or you could have images with text or audio notes. On the other hand, you could attach “stamps”, text and emoji to a series of photos that will be sent as an email or message like what is often done with the “stories” features in some of the social networks. In some ways it could be seen as a way to package vector-graphics images with a “compatibility” or “preview” bitmap image.
The HEIF format will also support non-destructive metadata-based editing where this editing is carried out using metadata that describes rectangular crops or image rotations. This is so you could revise an edit at a later time or obtain another edit from the same master image.
It also leads to the use of “derived images” which are the results of one of these edits or image compositions like an HDR or stitched-together panorama image. These can be generated at the time the file is opened or can be created by the editing or image management software and inserted in the HEIF file with the original images. Such a concept could also extend to the rendering and creation of a video object that is inserted in the HEIF file.
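The non-destructive, metadata-based editing idea can be sketched in miniature: the master image stays untouched while edits are stored as metadata records, and a "derived image" is produced by applying those records on demand. Real HEIF structures are far richer than this toy model, which just illustrates the concept:

```python
# Toy sketch of HEIF-style non-destructive editing. The master image
# (here a 2D list of pixel values) is never modified; edits live as
# metadata and a derived image is computed when needed.

def apply_edits(master, edits):
    """Return a derived image from a master plus a list of edit records."""
    image = [row[:] for row in master]          # copy: never touch the master
    for edit in edits:
        if edit["type"] == "crop":
            t, l = edit["top"], edit["left"]
            h, w = edit["height"], edit["width"]
            image = [row[l:l + w] for row in image[t:t + h]]
        elif edit["type"] == "rotate90":
            image = [list(row) for row in zip(*image[::-1])]  # clockwise
    return image

master = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]
edits = [{"type": "crop", "top": 0, "left": 1, "height": 2, "width": 2}]

print(apply_edits(master, edits))   # derived image: [[2, 3], [5, 6]]
print(master[0])                    # master unchanged: [1, 2, 3]
```

Because the edit is just metadata, it can be revised later or applied differently to obtain another derivation from the same master, exactly the workflow the format is designed to support.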
HEIF makes better use of advanced photo options offered by advanced cameras
Here, having a derived image or video inserted in the HEIF file can benefit situations where the playback or server setup doesn’t have enough computing power to create an edit or composition of acceptable quality in an acceptable timeframe. Similarly, it could answer situations where the software at the production/editing end does a superlative job of rendering the edits or compositions compared to what the serving or playback devices can do.
The file format even provides alternative viewing options for the same resource. For example, a user could see a wide-angle panorama based on a series of shots as either a wide-aspect-ratio image or a looping video sequence representing the camera panning across the scene.
What kind of support exists for the HEIF format?
At the moment, Apple’s iOS 11, tvOS 11 (Apple TV) and MacOS High Sierra provide native support for the HEIF format. The new iPhone and iPad provide hardware-level support for the HEVC codec that is part of this format, while the iOS 11 platform along with the iCloud service provides inherent exporting of these images for email and other services not implementing this format.
Microsoft is intending to integrate HEIF into Windows 10 from the Spring Creators Update onwards. As well, Google is intending to bake it into the “P” version of Android, which is their next feature version of that mobile platform.
As for dedicated devices like digital cameras, TVs and printers; there isn’t any native support for HEIF due to it being a relatively new image format. But most likely this will come about through newer devices or devices acquiring newer software.
Let’s not forget that newer image player and editing / management software that is continually maintained will be able to work with these files. The various online services like Dropbox, Apple iCloud or Facebook are or will be offering differing levels of HEIF image-file support through the evolution of their platforms. Depending on the service, this will be to hold the files “as-is” or to export and display them in a “best-case” format.
There will be some compatibility issues with hardware and software that doesn’t support this format. This may be rectified with operating systems, image editing / management software or online services that offer HEIF file-conversion abilities. Such software will need to export particular images or derivative images in an HEIF file as a JPEG image for stills or a GIF, MP4 or QuickTime MOV file for timed sequences, animations and similar material.
In the context of DLNA-based media servers, it may be about a similar approach to an online service where the media server has to be able to export original or derived images from one of these files held on a NAS as a JPEG still image or a GIF or MP4 video file where appropriate.
As the container-based HEIF image format comes on to the scene as a possible replacement for JPEG and GIF image files, it shows promise for both the snapshooter and the advanced photographer.
A common frustration that we all face when we play video games on a laptop, tablet or smartphone is that these devices run out of battery power after a relatively short amount of playing time. It doesn’t matter whether we use a mobile-optimised graphics infrastructure like what the iPad or our smartphones are equipped with, or a desktop-grade graphics infrastructure like the discrete or integrated graphics chipsets that laptops are kitted out with.
What typically happens in gameplay is that the graphics infrastructure paints multiple frames to create the illusion of movement. But most games tend to show static images for a long time, usually while we are planning the next move in the game. In a lot of cases, some of these situations may use a relatively small area where animation takes place be it to show a move taking place or to show a “barberpole” animation which is a looping animation that exists for effect when no activity takes place.
Microsoft is working on an approach for “painting” the interaction screens in a game so as to avoid the CPU and graphics infrastructure devoting too much effort towards this task. The goal is to allow a game to be played without consuming too much power, taking advantage of human visual perception to scale the number of frames needed to make an animation. There is also the concept of predictability for interpreting subsequent animations.
But a lot of the theory behind the research is very similar to how most video-compression codecs and techniques work. Here, these codecs use a “base” frame that acts as a reference and data that describes the animation that takes place relative to that base frame. Then during playback or reception, the software reconstructs the subsequent frames to make the animations that we see.
The research is mainly about an energy-efficient approach to measuring these perceptual differences during interactive gameplay based on the luminance component of a video image. Here, the luminance component of a video image would be equivalent to what you would have seen on a black-and-white TV. This therefore can be assessed without needing to place heavy power demands on the computer’s processing infrastructure.
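That luminance (luma) signal is just a weighted sum of the red, green and blue values, with green weighted heaviest because the eye is most sensitive to it. As a rough sketch of why comparing frames on luma alone is computationally cheap, here is the common Rec. 601 weighting applied to a frame-difference measure (the difference function is my own simplified illustration, not Microsoft’s actual metric):

```python
# Luma is the brightness signal a black-and-white TV would have shown.
# ITU-R BT.601 defines one common set of weights for deriving it from RGB.

def luma(r, g, b):
    """Rec. 601 luma: a single multiply-add per channel, no colour maths."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def mean_luma_difference(frame_a, frame_b):
    """Average per-pixel luma difference between two frames of (r, g, b) pixels."""
    diffs = [abs(luma(*p) - luma(*q)) for p, q in zip(frame_a, frame_b)]
    return sum(diffs) / len(diffs)

white = [(255, 255, 255)] * 4
black = [(0, 0, 0)] * 4
assert round(mean_luma_difference(white, black)) == 255
```

Because each pixel collapses to one number before any comparison happens, a perceptual-difference check like this costs a fraction of a full-colour comparison.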
The knowledge can then be used for designing graphics-handling software for games that are to be played on battery-powered devices, or to allow a “dual-power” approach for Windows, MacOS and Linux games. This is where a game can show a detailed animation with high performance on a laptop connected to AC power, but scale that animation back while the computer is on battery power.
Recently the computer press has been awash with articles claiming that Microsoft was killing off the Paint app that has come with Windows since version 1.0. But the company is keeping it available for Windows users to continue working with, by allowing them to download it for free from the Windows Store.
The Paint app was simply a basic bitmap-driven graphics editor that allowed users to get used to using a mouse for creating computer graphics. It was based on ZSoft’s PC Paintbrush, which was the PC’s answer to the various baseline graphics editors that have come with every mouse-driven graphical user interface since that kind of computing came on board with the Apple Macintosh in 1984.
This app ended up being the answer for any basic computer-graphics work at home or in the office, whether it be children creating computer drawings through to designers creating rough prototypical images of what they are designing in the office. I have in fact used Paint as part of creating screenshots for this Website, editing the various screenshots to redact private information or to call out particular menu options that I am talking about in the accompanying article. This was thanks to the easy learning curve that this software has offered from Day 1.
A common fear that I would express in relation to the press coverage about Microsoft abandoning or paying less attention to Paint and other bundled or cost-effective graphics tools (remember PhotoDraw?) is that they could end up stripping titles seen as less valuable from their application-software portfolio. Then they would just focus their efforts on popular premium business software like the “Office” essentials such as Word, Excel and PowerPoint.
At least those of you who buy a computer with the Windows 10 Fall (Autumn) Creators Update in situ don’t have to miss that basic Paint app because it isn’t delivered “out of the box”. Rather, you can raid the Windows Store and find this app there.
But could this be the path for evergreen software that was always distributed for free as a standalone package or with operating systems like graphics or sound editors by the major operating-system vendors?