Computer Software Archive

Alfaview brings forth a German competitor to the world of videoconferencing

Article – German Language / Deutsche Sprache

Flag of Germany

Germany now yields its own videoconferencing platform

Alfaview: Sichere Videochat-Software aus Deutschland (Alfaview : Secure Videochat Software from Germany) | Computer Bild

From the horse’s mouth

Alfaview

Home Page (English / Deutsch)

My Comments

A German company has fielded a videoconferencing package which is Europe’s answer to what Zoom, Skype and Microsoft Teams are about. This is part of a variety of efforts by European governments and businesses to create credible mainstream IT service alternatives to what the USA and China are offering while respecting European values. One example is Germany’s effort to create a public data-processing cloud within that country’s borders as part of leading a push towards a Europe-wide public cloud.

Alfaview screenshot press image of Alfaview

This is in the form of Alfaview which provides a Zoom-style experience

This company, Alfatraining Bildungszentrum GmbH, which is based in Karlsruhe, Baden-Württemberg, Germany, has released the Alfaview video-conferencing platform. Here, this platform is engineered to place privacy and European digital sovereignty first.

The Alfaview platform’s servers are based in Germany and the company heavily underscores the spirit of European values, especially compliance with the GDPR regulation. Videoconferencing data is encrypted using TLS with AES-256 during conversations. But they do allow the use of non-German services as long as they are hosted in the EU, again underscoring European values. There is also the ability for people to join the platform from all over the world, thus avoiding a problem common to European technologies and services where they have limited usability beyond Europe.

As well, it answers the weaknesses associated with the videoconferencing establishment when it comes to offering this kind of service to consumers and small businesses. This encompasses Zoom’s security shortcomings, along with Microsoft neglecting Skype and pitching the Teams videoconferencing package squarely at big business. As well, Facebook, who has jumped on the bandwagon with Messenger Rooms, is not all that respected when it comes to security and privacy.

Alfaview runs natively on Windows, MacOS, Linux (Debian package) and iOS, and will soon be ported to Android. But they could simply reuse the Linux package as a code base for reaching the ChromeOS and Android platforms. As well, I am not sure whether the iOS version is optimised for the iPad, something I consider important for mobile platforms that have tablet devices because these devices have a strong appeal for multi-party video conferences.

There is a free package for individuals and families to use which provides one room with up to 50 participants. As well, Alfaview has a Free Plus package pitched towards the education and non-profit sector. This one has most of the features of the corporate package, such as 40 rooms per account with 50 participants each. There is also the ability to run 10 concurrent breakout groups per room.

This is in conjunction with various paid plans that ordinary businesses can buy in to for their videoconferencing needs. Alfaview even offers the software in a “white-label” form for companies to brand as their own.

But what I see of the Alfaview approach is that the Europeans are offering a Zoom-style service that respects their values while competing with what the Silicon Valley establishment is offering.

Send to Kindle

Do I see regular computing targeting both i86 and ARM microarchitectures?

Lenovo Yoga 5G convertible notebook press image courtesy of Lenovo

Lenovo Flex 5G / Yoga 5G convertible notebook which runs Windows on Qualcomm ARM silicon – the first laptop computer to have 5G mobile broadband on board

Increasingly, regular computers are moving towards having processor power based on either the classic Intel (i86/i64) or ARM RISC microarchitectures. This is being driven by portable computers heading towards the latter microarchitecture as a power-efficiency measure, a concept driven by its success with smartphones and tablets.

ARM represents a different approach to designing silicon, especially RISC-based silicon, where different entities are involved in design and manufacturing. Previously, Motorola took the same approach as Intel and other silicon vendors, designing and manufacturing their desktop-computing CPUs and graphics infrastructure themselves. Now ARM designs the microarchitecture while other entities like Samsung and Qualcomm design and fabricate the exact silicon for their devices.

Apple MacBook Pro running MacOS X Mavericks - press picture courtesy of Apple

Apple to move the Macintosh platform to their own ARM RISC silicon

A key driver of this is Microsoft with their Always Connected PC initiative which uses Qualcomm ARM silicon similar to what is used in a smartphone or tablet. The goal is to have the computer able to work on basic productivity tasks for a whole day without needing to be on AC power. Then there is Apple, who intend to pull away from Intel and use their own ARM-based silicon for their Macintosh regular computers, a sign of the platform going back to its RISC roots but not in a monolithic manner.

As well, the Linux community have established Linux-based operating systems on the ARM microarchitecture. This has led to Google running Android on ARM-based mobile and set-top devices and offering Chromebooks that use ARM silicon, along with Apple implementing it in their operating systems. Not to mention the many NAS devices and other home-network hardware that implement ARM silicon.

Initially the RISC-based computing approach was about more sophisticated use cases like multimedia or “workstation-class” computing compared to basic word-processing and allied computing tasks. Think of the early Apple Macintosh computers, the Commodore Amiga with its many “demos” and games, or the RISC/UNIX workstations like the Sun SPARCstation that existed in the late 80s and early 90s. Now it is about power and thermal efficiency for a wide range of computing tasks, especially where portable or low-profile devices are concerned.

Software development

Already mobile and set-top devices use ARM silicon

I see an expectation for computer operating systems and application software to be written and compiled for both the classic Intel i86 and ARM RISC microarchitectures. This will require software development tools to support compiling and debugging on both platforms and, perhaps, microarchitecture-agnostic application-programming approaches. It is also driven by the use of the ARM RISC microarchitecture in mobile and set-top/connected-TV computing environments, with a desire to allow software developers to have software that is useable across all computing environments.

WD MyCloud EX4100 NAS press image courtesy of Western Digital

.. as do a significant number of NAS units like this WD MyCloud EX4100 NAS

Some software developers, usually small-time or bespoke-solution developers, will end up using “managed” software development environments like Microsoft’s .NET Framework or Java. These allow the programmer to turn out an executable file that depends on pre-installed run-time elements in order to run. These run-time elements are installed in a manner specific to the host computer’s microarchitecture and make best use of the host computer’s capabilities. These environments may allow the software developer to “write once, run anywhere” without knowing whether the computer the software is to run on uses an i86 or ARM microarchitecture.
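Python itself is one of these managed environments, so a small sketch can show the “write once, run anywhere” idea: the script below reports which microarchitecture family the host interpreter is running on, while the script itself stays identical across them. The grouping of machine names is my own simplification.

```python
import platform

def host_family() -> str:
    """Classify the host ISA; the grouping below is a simplification."""
    machine = platform.machine().lower()   # e.g. "x86_64", "arm64", "aarch64"
    if machine in ("x86_64", "amd64", "i386", "i686"):
        return "intel"
    if machine.startswith(("arm", "aarch")):
        return "arm"
    return "other"

# The same source file -- and the bytecode compiled from it -- runs
# unchanged on either family; the interpreter installed on the host
# does the microarchitecture-specific work, much as the .NET or Java
# run-time elements do.
print(host_family())
```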

There may also be an approach towards “one-machine two instruction-sets” software development environments to facilitate this kind of development where the goal is to simply turn out a fully-compiled executable file for both instruction sets.

It could be in an accepted form like run-time emulation or machine-code translation, such as what is used to allow MacOS or Windows to run extant software written for different microarchitectures. Or one may have to look at what went on with some early computer platforms like the Apple II, where a user-installable co-processor card with the required CPU would allow the computer to run software for another microarchitecture and platform.

Computer Hardware Vendors

For computer hardware vendors, there will be an expectation to position ARM-based silicon towards high-performance power-efficient computing. This may be about highly-capable laptops that can do a wide range of computing tasks without running out of battery power too soon. Or it may be about “all-in-one” and low-profile desktop computers gaining increased legitimacy for high-performance computing while maintaining their svelte looks.

Personally, if ARM-based computing is to gain significant traction, it may have to be about Microsoft encouraging silicon vendors other than Qualcomm to offer ARM-based CPUs and graphics processors fit for “regular” computers. As well, Microsoft and the Linux community may have to look towards legitimising “performance-class” computing tasks like “core” gaming and workstation-class computing on that microarchitecture.

There may be the idea of using the 64-bit i86 microarchitecture as a solution for focused high-performance work. This may be due to the large amount of high-performance software code written to run on classic Intel and AMD silicon. It will most likely persist until a significant amount of high-performance software is written to run natively on ARM silicon.

Conclusion

Thanks to Apple and Microsoft heading towards ARM RISC microarchitecture, the computer hardware and software community will have to look at working with two different microarchitectures especially when it comes to regular computers.


Apple to use the ARM microarchitecture in newer Mac computers

Article

Apple MacBook Pro running MacOS X Mavericks - press picture courtesy of Apple

The Apple Mac platform is to move towards Apple’s own silicon that uses the ARM RISC microarchitecture

It’s Official: The Mac Is Transitioning to Apple-Made Silicon | Gizmodo

My Comments

This week, Apple used its WWDC software developers’ conference to announce that the Macintosh regular-computer platform will move away from Intel’s silicon to their own ARM-based silicon. This is to bring that computing platform in to line with their iOS/iPadOS mobile computing platform, their tvOS Apple TV set-top platform and their Watch platform, all of which use Apple’s own silicon.

Here, this silicon will use the ARM RISC instruction-set microarchitecture rather than the x86/x64 architecture used with Intel silicon. But Apple is no stranger to moving the Macintosh computing platform between microarchitectures.

Initially this platform used Motorola 680x0 silicon, a CISC design, before moving to PowerPC silicon which used a RISC instruction-set microarchitecture. The PowerPC platform initially had more chops compared to Intel’s x86 platform, especially when it came to graphics and multimedia. Then, when Apple realised that Intel offered cost-effective microprocessors using the x86-64 microarchitecture with the same kind of multimedia prowess as the PowerPC processors, they moved the Macintosh platform to Intel silicon.

But Apple had to take initiatives to bring the MacOS and Mac application software over to this platform. This required them to supply software development tools to the software-development community so the programs they write could be compiled for both PowerPC and Intel instruction sets. They also furnished an instruction-set translator called Rosetta to users of Intel-based Macs so they could run extant software written for PowerPC silicon.

For a few years, this caused some awkwardness for Mac users, especially early adopters, due to the limited availability of software natively compiled for Intel silicon. Or they were finding that their existing PowerPC-native software ran too slowly on their Intel-based computers thanks to the Rosetta instruction-set-translation software working between the program and the computer’s silicon.

Apple will be repeating this process in a very similar way to the Intel transition by providing software-development tools that build for both Intel i86-64 silicon and their own ARM-RISC silicon. As well, they will issue Rosetta 2, which does the same job as the original Rosetta but translates i86-64 CISC machine instructions to the ARM RISC instruction set that their own silicon uses. Rosetta 2 will be part of the next major version of MacOS, which will be known as Big Sur.
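As a loose illustration of what machine-code translation involves (not Rosetta’s actual technique, and with invented mnemonics rather than real x86 or ARM opcodes), a translator can expand each complex memory-operand instruction into an equivalent load/operate/store sequence for a load-store RISC machine:

```python
# Toy sketch of static instruction-set translation. The "CISC" and
# "RISC" mnemonics here are invented for illustration and bear no
# relation to the real x86-64 or ARM instruction sets.
CISC_TO_RISC = {
    # A memory-to-register CISC add becomes an explicit
    # load/add/store sequence on the load-store RISC machine.
    "ADDM": ["LDR", "ADD", "STR"],
    "MOVM": ["LDR", "STR"],
    "INC":  ["ADDI"],
}

def translate(cisc_program):
    """Expand each CISC-style opcode into its RISC-style equivalent."""
    risc_program = []
    for op in cisc_program:
        risc_program.extend(CISC_TO_RISC.get(op, [op]))
    return risc_program

print(translate(["MOVM", "INC"]))  # ['LDR', 'STR', 'ADDI']
```

The translated program is longer, which hints at why translated software can run more slowly than natively compiled software, as early Intel-Mac adopters found with the original Rosetta.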

The question that will be raised amongst developers and users of high-resource-load software like games or engineering software is what impact this transition will have on that level of software. Typically most games are issued for the main games consoles and Windows-driven Intel-architecture PCs before Macs or tvOS-based Apple TV set-top devices, with ports for these platforms coming later in the software’s evolution.

There is an expectation that the Rosetta 2 instruction-set-translation software could handle this kind of software properly, to the point that it can perform satisfactorily on a computer using integrated graphics infrastructure and working at Full HD resolution. Then there will be the issue of making sure it works with a Mac that uses discrete graphics infrastructure and higher display resolutions, thus giving the MacOS platform some “gaming chops”.

I see the rise of ARM RISC silicon in the traditional regular-computing world, existing alongside classic Intel-based silicon in this computing space as is happening with Apple and Microsoft, as a challenge for computer software development. This is although some work has taken place within the UNIX / Linux space to facilitate the development of software for multiple computer types, leading to this space bringing forth the open-source and shared-source software movements. This is more so with Microsoft, where there is an expectation to have Intel-based silicon and ARM-based silicon exist alongside each other for the life of a common desktop computing platform, with each silicon type serving particular use cases.


Intel to make graphics driver updates independent of PC manufacturer customisations

Article

Dell XPS 13 Kaby Lake

Laptops with Intel graphics infrastructure like this Dell XPS 13 will benefit from having any manufacturer-specific customisations to the graphics driver software delivered as a separate item from that driver code

Intel graphics drivers can now be updated separately from OEM customizations | Windows Central

From the horse’s mouth

Intel

Intel Graphics – Windows 10 DCH drivers (Latest download site)

My Comments

Intel is now taking a different approach to packaging the necessary Windows driver software for its graphics infrastructure. This will affect any of us who have Intel graphics infrastructure in our computers, including those of us who have Intel integrated-graphics chipsets working alongside third-party discrete graphics infrastructure in our laptops as an energy-saving measure.

Previously, computer or motherboard manufacturers who wanted to apply any customisations to their Intel integrated-graphics driver software for their products had to package the customisations with the driver software as a single entity. Typically it was to allow the computer manufacturer to optimise the software for their systems or introduce extra display-focused features peculiar to their product range.

Dell Inspiron 15 Gaming laptop

.. even if the Intel graphics architecture is used as a “lean-burn” option for high-performance machines like this Dell Inspiron 15 7000 Gaming laptop when they are run on battery power

This caused problems for those of us who wanted to keep the driver software up-to-date to get the best out of the integrated graphics infrastructure in our Intel-based laptops.

If you wanted to benefit from the manufacturer-supplied software customisations, you had to go to the manufacturer’s software-support Website to download the latest drivers which would have your machine’s specific customisations.

Here, the latest version of the customised drivers may be out-of-step with the latest graphics-driver updates offered by Intel on its Website, and if you use Intel’s driver packages, you may not benefit from the customisations your machine’s manufacturer offered.

The different approach Intel is using is to have the graphics driver and the customisations specific to your computer delivered as separate software packages.

Here, Intel will be responsible for maintaining their graphics-driver software as a separate generic package which will have API “hooks” for any manufacturer-specific customisation or optimisation code to use. Users can pick this up from the Intel driver-update download site, the manufacturer’s software update site or Windows Update. Then the computer manufacturer will be responsible for maintaining the software peculiar to their customisations and offering the updates for that software via their support / downloads Website or Microsoft’s Windows Update.
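The split described above can be sketched as a hook-based plugin arrangement: a generic driver exposes named hook points and a separately delivered OEM package registers its customisation code against them. All class and hook names here are invented for illustration; they are not Intel’s actual driver interfaces.

```python
class GenericGraphicsDriver:
    """Generic driver maintained by the silicon vendor."""

    def __init__(self):
        # Hook points that separately shipped OEM packages can attach to.
        self._hooks = {"display_mode_set": []}

    def register_hook(self, event, callback):
        """Called by an OEM customisation package to attach its code."""
        self._hooks[event].append(callback)

    def set_display_mode(self, mode):
        result = {"mode": mode}            # generic baseline behaviour
        for hook in self._hooks["display_mode_set"]:
            hook(result)                   # OEM customisations adjust the outcome
        return result

# Shipped and updated separately by the computer manufacturer.
def oem_colour_tuning(result):
    result["colour_profile"] = "vendor-tuned"

driver = GenericGraphicsDriver()
driver.register_hook("display_mode_set", oem_colour_tuning)
print(driver.set_display_mode("1920x1080"))
# {'mode': '1920x1080', 'colour_profile': 'vendor-tuned'}
```

The point of the design is that either half can be updated on its own cadence: a new generic driver keeps calling the same hooks, and a new OEM package registers against the same hook points.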

It may be seen as a two-step process if you are using Intel’s and your computer manufacturer’s Websites or software-update apps for this purpose. On the other hand, if you rely on Windows Update as your driver-update path, this process would be simplified.

The issue of providing computer-specific customisations for software drivers associated with computer hardware subsystems may end up being revisited after Intel’s effort. This will be more so with sound subsystems for laptops that have their audio tuned by a respected name in the audio industry, or with common network chipsets implemented in a manufacturer-peculiar manner.

At least you can have your cake and eat it when it comes to running the latest graphics drivers on your Intel-based integrated-graphics-equipped laptop.


What do I mean by a native client for an online service?

Facebook Messenger Windows 10 native client

Facebook Messenger – now native on Windows 10

With the increasing number of online services including cloud and “as-a-service” computing arrangements, there has to be a way for users to gain access to these services.

Previously, the common way was to use a Web-based user interface where the user has to run a Web browser to gain access to the online service. The user sees the Web browser’s interface elements, also known as “chrome”, as part of the user experience. This used to be very limiting when it came to functionality, but various revisions have allowed for functionality similar to that of regular apps.

Dropbox native client view for Windows 8 Desktop

Dropbox native client view for Windows 8 Desktop- taking advantage of what Windows Explorer offers

A variant on this theme is a “Web app” which provides a user interface without the Web browser’s interface elements. But the Web browser works as an interpreter between the online service and the user interface although the user doesn’t see it as that. It is appealing as an approach to writing online service clients due to the idea of “write once run anywhere”.

Another common approach is to write an app that is native to a particular computing platform and operating system. These apps, which I describe as “native clients”, are totally optimised in performance and functionality for that computing platform. This is because there isn’t the overhead associated with a Web browser needing to interpret code associated with a Web page. As well, the software developer can take advantage of what the computing platform and operating system offer even before the Web browser developers build support for a function in to their products.

There are some good examples of online-service native clients having an advantage over Web apps or Web pages. One of these is messaging and communications software. Here, a user may want to use an instant-messaging program to communicate with their friends or colleagues while using graphics software or games which place demands on the computer. Here, a native instant-messaging client can run alongside the high-demand program without the overhead associated with a Web browser.

The same situation can apply to online games where players can see a perceived improvement in their performance. As well, it is easier for the software developer to write them to take advantage of higher-performance processing silicon. It includes designing an online game for a portable computing platform that optimises itself for either external power or battery power.

This brings me to native-client apps that are designed for a particular computing platform from the outset. One key application is to provide a user interface that is optimised for “lean-back” operation, something that is demanded of anything that is about TV and video content. The goal often is to support a large screen viewed at a distance, with the user navigating the interface using a remote control that has a D-pad and perhaps a numeric keypad. The remote control is the primary kind of user interface that most smart TVs and set-top boxes offer.

Another example is software that is written to work with online file-storage services. Here, a native client for these services can be written to expose your files at the online file-storage service as if they were another file collection similar to your computer’s hard disk or a removable medium.

Let’s not forget that native-client apps can be designed to make best use of application-specific peripherals due to them directly working with the operating system. This can work well with setups that demand the use of application-specific peripherals, including the so-called “app-cessory” setups for various devices.

When an online platform is being developed, the client software developers shouldn’t forget the idea of creating native client software that works tightly with the host computer platform.


What is the new HEIF image file format about?

Apple iPad Pro 9.7 inch press picture courtesy of Apple

An iPhone or iPad running iOS 11 has native support for the HEIF image file format

A new image file format has started to surface over the last few years but it is based on a different approach to storing image files.

This file format, known as HEIF or High Efficiency Image Format, is designed and managed by the MPEG group who have defined a lot of commonly-used video formats. It is being seen by some as the “still-image version of HEVC”, with HEVC being the video codec used for 4K UHDTV video files. It uses HEVC as the preferred codec for image files and will provide support for newer and better image codecs, including codecs offering lossless image compression in a similar vein to what FLAC offers for audio files.

Unlike JPEG and the other image files that have existed before it, HEIF is seen as a “container file” for multiple image and other objects rather than just one image file and some associated metadata. As well, the HEIF file format and the HEVC codec are designed to take advantage of today’s powerful computing hardware integrated in our devices.

The primary rule here for HEIF is that it is a container file format specifically for collections of still images. It is not really about replacing one of the video container file formats like MP4 or MKV which are primarily used for video footage.

Simple concept view of the HEIF image file format

What will this mean?

One HEIF file could contain a collection of image files such as “mapping images” to improve image playback in certain situations. It can also contain the images taken during a “burst” shot where the camera takes a rapid sequence of images. This can also apply with image bracketing where you take a sequence of shots at different exposure, focus or other camera settings to identify an ideal image setup or create an advanced composite photograph.

This could lead to HEIF displacing GIF as the carrier for the animated images that are often provided on the Web. Here, you could use software to identify a sequence of images to be played like a video, including having them repeat in a loop thanks to in-file metadata. This emulates what the Apple Live Photos function was about on iOS and can allow users to create high-quality animations, cinemagraphs (still photos with a small discreet looping animation) or slide-shows in the one HEIF file.

HEIF uses the same codec associated with 4K UHDTV for digital photos

There is also the ability to store non-image objects like text, audio or video in an HEIF file along with the images, which can lead to a lot of things. For example, you could set a smartphone to take a still and a short video clip at the same time like with Apple Live Photos, or you could have images with text or audio notes. On the other hand, you could attach “stamps”, text and emoji to a series of photos that will be sent as an email or message, like what is often done with the “stories” features in some of the social networks. In some ways it could be seen as a way to package vector-graphics images with a “compatibility” or “preview” bitmap image.

The HEIF format will also support non-destructive metadata-based editing, where edits are carried out using metadata that describes rectangular crops or image rotations. This is so you can revise an edit at a later time or obtain another edit from the same master image.
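A minimal sketch of that idea, with invented metadata field names: the edit is stored as a crop rectangle plus a rotation, and the viewer derives the displayed size from that metadata while the original pixels stay untouched.

```python
# Conceptual sketch of non-destructive metadata-based editing. The
# original pixels (represented here just by their dimensions) are never
# modified; the edit lives purely in metadata. Field names are invented.
original = {"width": 4000, "height": 3000}
edit = {"crop": {"x": 500, "y": 250, "width": 1920, "height": 1080},
        "rotation_degrees": 90}

def rendered_size(edit):
    """Displayed size implied by the crop and rotation metadata."""
    w, h = edit["crop"]["width"], edit["crop"]["height"]
    if edit["rotation_degrees"] in (90, 270):
        w, h = h, w  # a quarter-turn rotation swaps the axes
    return (w, h)

print(rendered_size(edit))  # (1080, 1920)
print(original)             # the master image stays exactly as shot
```

Revising the edit later just means changing the metadata and rendering again from the same master.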

It also leads to the use of “derived images” which are the results of one of these edits or image compositions like an HDR or stitched-together panorama image. These can be generated at the time the file is opened or can be created by the editing or image management software and inserted in the HEIF file with the original images. Such a concept could also extend to the rendering and creation of a video object that is inserted in the HEIF file.

HEIF makes better use of advanced photo options offered by advanced cameras

Here, having a derived image or video inserted in the HEIF file can benefit situations where the playback or server setup doesn’t have enough computing power to create an edit or composition of acceptable quality in an acceptable timeframe. Similarly, it could answer situations where the software used at the production/editing end does a superlative job of rendering the edits or compositions compared to what the serving or playback devices can do.

The file format even provides alternative viewing options for the same resource. For example, a user could see a wide-angle panorama based on a series of shots as either a wide-aspect-ratio image or a looping video sequence representing the camera panning across the scene.

What kind of support exists for the HEIF format?

At the moment, Apple’s iOS 11, tvOS 11 (Apple TV) and MacOS High Sierra provide native support for the HEIF format. The new iPhone and iPad provide hardware-level support for the HEVC codec that is part of this format and the iOS 11 platform along with the iCloud service provides inherent exporting of these images for email and other services not implementing this format.

Microsoft is intending to integrate HEIF in to Windows 10 from the Spring Creators Update onwards. As well, Google is intending to bake it in to the “P” version of Android which is their next feature version of that mobile platform.

As for dedicated devices like digital cameras, TVs and printers; there isn’t any native support for HEIF due to it being a relatively new image format. But most likely this will come about through newer devices or devices acquiring newer software.

Let’s not forget that newer image player and editing / management software that is continually maintained will be able to work with these files. The various online services like Dropbox, Apple iCloud or Facebook are or will be offering differing levels of HEIF image-file support through the evolution of their platforms. Depending on the service, this will be to hold the files “as-is” or to export and display them in a “best-case” format.

There will be some compatibility issues with hardware and software that doesn’t support this format. This may be rectified with operating systems, image editing / management software or online services that offer HEIF file-conversion abilities. Such software will need to export particular images or derivative images in an HEIF file as a JPEG image for stills or a GIF, MP4 or QuickTime MOV file for timed sequences, animations and similar material.
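That conversion logic might be sketched like this, with the item categories and format choices being my own assumptions about how such an exporter could decide:

```python
# Sketch of fallback-export logic for HEIF payloads: choose a widely
# supported legacy format for each kind of item. The category names
# and the preference rule are invented for illustration.
FALLBACKS = {
    "still": "JPEG",
    "animation": "GIF",
    "video": "MP4",
}

def fallback_format(item_kind, prefer_video_for_animation=False):
    """Pick a widely supported export format for a HEIF item."""
    if item_kind == "animation" and prefer_video_for_animation:
        return "MP4"  # long or high-colour animations fare better as video
    return FALLBACKS.get(item_kind, "JPEG")

print(fallback_format("still"))      # JPEG
print(fallback_format("animation"))  # GIF
```

A DLNA media server exporting from a NAS could apply the same kind of decision per item, as described below.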

In the context of DLNA-based media servers, it may be about a similar approach to an online service where the media server has to be able to export original or derived images from one of these files held on a NAS as a JPEG still image or a GIF or MP4 video file where appropriate.

Conclusion

As the container-based HEIF image format comes on to the scene as the possible replacement for JPEG and GIF image files, it could be about an image file format that shows promise for both the snapshooter and the advanced photographer.


Microsoft to improve user experience and battery runtime for mobile gaming

Article – From the horse’s mouth

Candy Crush Saga gameplay screen Android

Microsoft researching how games like Candy Crush Saga can work with full enjoyment but not demanding much power

Microsoft Research

RAVEN: Reducing Power Consumption of Mobile Games without Compromising User Experience (Blog Post)

My Comments

A common frustration that we all face when we play video games on a laptop, tablet or smartphone is that these devices run out of battery power after a relatively short amount of playing time. It doesn’t matter whether we use a mobile-optimised graphics infrastructure like what the iPad or our smartphones are equipped with, or a desktop-grade graphics infrastructure like the discrete or integrated graphics chipsets that laptops are kitted out with.

What typically happens in gameplay is that the graphics infrastructure paints multiple frames to create the illusion of movement. But most games tend to show static images for a long time, usually while we are planning the next move in the game. In a lot of cases, these situations may use a relatively small area where animation takes place, be it to show a move taking place or a “barberpole” animation, which is a looping animation that exists for effect when no activity is taking place.

Microsoft is working on an approach to “painting” the interaction screens in a game so as to avoid the CPU and graphics infrastructure devoting too much effort to this task. The goal is to allow a game to be played without consuming too much power, taking advantage of human visual perception to scale the number of frames needed to make an animation. There is also the concept of predictability for interpreting subsequent animations.

But a lot of the theory behind the research is very similar to how most video-compression codecs and techniques work. Here, these codecs use a “base” frame that acts as a reference and data that describes the animation that takes place relative to that base frame. Then during playback or reception, the software reconstructs the subsequent frames to make the animations that we see.
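The base-frame-and-delta idea can be sketched over a toy one-dimensional “frame” of pixel values, with each delta listing only the pixels that changed:

```python
# Sketch of base-frame-plus-delta reconstruction as used by video
# codecs, shown over a tiny 1-D "frame" of pixel values for clarity.
base_frame = [10, 10, 10, 10]
# Each delta lists (index, new_value) pairs for the pixels that changed.
deltas = [
    [(1, 50)],
    [(1, 60), (2, 60)],
]

def reconstruct(base, deltas):
    """Rebuild the full frame sequence from the base frame and deltas."""
    frames = [list(base)]
    for delta in deltas:
        frame = list(frames[-1])  # start from the previous frame
        for index, value in delta:
            frame[index] = value
        frames.append(frame)
    return frames

print(reconstruct(base_frame, deltas)[-1])  # [10, 60, 60, 10]
```

When only a small screen area animates, the deltas stay small, which is exactly the property the research exploits.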

The research is mainly about an energy-efficient approach to measuring these perceptual differences during interactive gameplay, based on the luminance component of a video image. Here, the luminance component is equivalent to what you would have seen on a black-and-white TV, and it can therefore be assessed without placing heavy power demands on the computer’s processing infrastructure.
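A sketch of such a luminance-based measure, using the standard Rec. 601 weights for deriving the Y (black-and-white) component from RGB; the simple mean-absolute-difference metric here is my own stand-in for RAVEN’s more sophisticated perceptual model:

```python
# Luminance-based frame difference. The Rec. 601 weights are the
# standard ones for deriving the "Y" component from RGB; the
# mean-absolute-difference metric is a simplified illustration.
def luminance(pixel):
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def mean_luma_difference(frame_a, frame_b):
    """Average per-pixel luminance change between two frames."""
    diffs = [abs(luminance(a) - luminance(b))
             for a, b in zip(frame_a, frame_b)]
    return sum(diffs) / len(diffs)

frame1 = [(255, 255, 255), (0, 0, 0)]       # white pixel, black pixel
frame2 = [(255, 255, 255), (255, 255, 255)]  # second pixel turned white
print(round(mean_luma_difference(frame1, frame2), 3))  # 127.5
```

Working on the single luminance channel rather than all three colour channels is part of what keeps the measurement itself cheap.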

The knowledge can then be used to design graphics-handling software for games played on battery-powered devices, or to allow a “dual-power” approach for Windows, macOS and Linux games. Here, a game could show detailed, high-performance animation on a laptop connected to AC power but scale that animation back while the computer runs on battery power.
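
A “dual-power” approach could look like the hypothetical sketch below: the game picks a rendering profile based on the power source. The function name and profile values are illustrative only; a real game would obtain the power state from an operating-system facility (for example, the `psutil` library’s `sensors_battery()` call on desktop platforms).

```python
# Hypothetical "dual-power" rendering profile: richer animation on AC
# power, a leaner one on battery. The profile values are illustrative.

def select_render_profile(on_battery):
    """Return rendering settings appropriate to the current power source."""
    if on_battery:
        # Favour battery life: fewer frames, no decorative idle animation
        return {"target_fps": 30, "idle_animations": False}
    # Favour visual quality when plugged in to AC power
    return {"target_fps": 60, "idle_animations": True}

# In a real game loop the power state would come from an OS API rather
# than a hard-coded flag.
profile = select_render_profile(on_battery=True)
assert profile["target_fps"] == 30
```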


Microsoft Paint – here to stay but available down another path

Articles

Windows Paint – here to stay but will be available through Windows Store

Microsoft Paint isn’t dead yet, will live in the Windows Store for free | The Verge

Classic MS Paint is coming to the Windows Store, for FREE! | Windows Central

From the horse’s mouth

Microsoft

Windows Experience Blog

My Comments

Recently the computer press has been awash with articles claiming that Microsoft was killing the Paint app that has come with every version of Windows since 1.0. But Microsoft is keeping it available for Windows users to continue working with, by allowing them to download it for free from the Windows Store.

The Paint app was simply a basic bitmap-driven graphics editor that allowed users to get used to using a mouse for creating computer graphics. It was based on ZSoft’s PC Paintbrush, which was the PC’s answer to the baseline graphics editors that came with every mouse-driven graphical user interface after the Apple Macintosh brought that kind of computing on board in 1984.

This app ended up being the answer for any basic computer-graphics work at home or in the office, whether it be children creating computer drawings or designers roughing out prototype images of what they are designing. I have in fact used Paint when creating screenshots for this Website, whether to redact private information or to call out particular menu options that I am talking about in the accompanying article. This was thanks to the easy learning curve that this software has offered from day one.

A common fear that I have expressed in relation to the press coverage about Microsoft abandoning or paying less attention to Paint and other bundled or cost-effective graphics tools (remember PhotoDraw?) is that the company could end up stripping titles seen as less valuable from its application-software portfolio. It would then focus its efforts on popular premium business software like the “Office” essentials such as Word, Excel and PowerPoint.

At least those of you who buy a computer with the Windows 10 Fall (Autumn) Creators Update in situ don’t have to miss the basic Paint app just because it isn’t delivered “out of the box”. Rather, you can raid the Windows Store and find this app there.

But could this be the path for evergreen software, such as the graphics or sound editors that the major operating-system vendors have always distributed for free, whether standalone or bundled with their operating systems?


Two ways to put indie games on the map

Article

GOG Galaxy client app (Windows)

One of the indie games markets out there

Calling all indie developers in the US & Canada: sign up for the Google Play Indie Games Festival in San Francisco | Google Android Developers Blog

Google Play announces 2017 Indie Games Festival for US and Canadian developers | 9to5 Google

Jump aims to be “Netflix for indie games” while still benefiting developers | KitGuru

Previous coverage

Competition arises for the online games storefront

My Comments

An issue that may face a games developer who wants to work the independent path is how to put the game on the map as far as the public is concerned. For some developers, the priority is to avoid heading to the “Hollywood of electronic games”, where computer games become very similar in quality to the Hollywood blockbuster movies or popular American network and cable TV shows, ending up with the same-old content amid questionable ethics.

Google Play Store

Google Play Store – a step towards the indie game market

One way would be to put the games on one or more platform-specific app stores like the iTunes App Store, Google Play or Microsoft Store. On the other hand, if the game is targeted at regular computers, the developer may offer it through indie-focused software markets like GOG Galaxy or the upcoming Jump subscription-driven market.

The second option appeals to those who want to keep the game free of mainstream influence, much as indie-music records are offered through inner-urban record stores and played on community radio stations or at venues the target audience visits, like the cool cafes. It may also include making the games downloadable through the developer’s own Website. But this path may only suit the Windows, Linux or Mac regular-computing platforms rather than mobile devices, consoles or set-top devices.

But Google has taken an approach similar to a mainstream full-line music outlet running and promoting an “indie” genre. Here, they are running the “Indie Games Festival” in San Francisco to draw out indie games developers and have them offer the Android platform the best software they can provide. I see this as a way to stimulate the indie games market, especially by courting developers who are writing for mobile platforms.

I even see the Microsoft Store in a better position to court indie games developers who don’t mind going down the second path, by encouraging them to develop games for the Windows platform and the Xbox One console in a “write-once” approach. Here, Microsoft could take a similar tack to Google by running a dedicated festival for indie games developers who want to target regular-computer and games-console platforms.

At the moment, when it comes to games for the regular computer, a dedicated indie-game market or an established games market like the Microsoft Store, iTunes Mac App Store or Steam are the viable options. But when it comes to targeting other devices like mobile devices, games consoles or set-top devices, the only way is through the platform-specific app stores; unless the platform encourages quality independent software development, these will be very limiting for the indie games developer.

There still needs to be interest in and support for the indie games market from many app stores and games markets so that electronic gaming is an environment for high-quality electronic games that appeal to all people.


Microsoft dropping features from Windows 10 Fall Creators Update

Article

Acer Switch Alpha 12 2-in-1 with keyboard press image courtesy of Acer

There is the risk of over-promising and under-delivering when there is a short time between major operating system updates

Where do we stand with features for the Windows 10 Fall Creators Update? | Supersite Windows

My Comments

An increasing trend for regular-computer and mobile operating systems is for them to be updated on a regular basis along the model of “software-as-a-service”.

With this model, the companies behind these operating systems will typically license the operating system with new hardware, but not require the user to pay for newly-developed functionality. This goes hand in hand with making sure that bugs and security exploits are removed from the system.

A problem that has been found with this method of delivery is that it can be easy to over-promise and under-deliver, as Microsoft commonly does. This has shown up more so with the Fall Creators Update of Windows 10, where Microsoft removed Windows Timeline and the Cloud-Powered Clipboard, two heavily-promoted features, from the feature list for that update.

What is underscored here is the frequency of major updates that add significant amounts of functionality to an operating system, along with calling out the promised improvements for these updates. Apple and Google implement a yearly cycle when it comes to delivering major operating-system updates that are about adding extra features while Microsoft undertakes this on a six-monthly basis.

The advantage of the long lead time that Apple and Google run with is that they can deliver on their promises by writing the code and subjecting it to a long debug and optimisation cycle, including delivering it in publicly-available beta-test packages. This contrasts with Microsoft calling out features for a major functionality update and having to have all of them work to expectation by the time the update is considered “feature complete”.

But how can Microsoft and others who implement short lead times for major functionality updates avoid over-promising? They could announce that some features are destined for the upcoming functionality update while others are destined for the subsequent one.

Similarly, they could deliver the functionality in an “out-of-band” form such as free-to-install apps provided through the platform’s app store, a practice Google is undertaking with the Android platform. In the case of functionality dependent on a particular peripheral class, it may be delivered as part of the onboarding process that the operating system performs when that peripheral is connected for the first time.

Personally, I would like to see some effort put towards fine-tuning the peripheral and network interface software code to encourage more “driver-free” peripheral connectivity that occurs in a secure stable manner to the latest specifications for that device class.

What is being highlighted here is the risk of over-promising and under-delivering when it comes to delivering major functionality updates for an operating system.
