Computer Software Archive

Don’t forget that the Browser Choice Screen is your one-stop Web browser port-of-call

Previous Coverage – HomeNetworking01.info

Understanding The Browser Choice Screen (EN, FR)

Web site

Browser Choice Screen – http://browserchoice.eu

My Comments

Previously, I have covered the Browser Choice Screen, which was part of Microsoft’s anti-trust settlement with the European Commission concerning Internet Explorer. Under this settlement, consumer and small-business Windows setups in the European Union were to offer people a choice of Web browser for their operating environment.

But I still see this menu Web page as a “one-stop” port-of-call for people anywhere in the world who want to install new Web browsers or repair a damaged Web-browser installation. This resource came in handy when I was repairing a houseguest’s computer that had been damaged by a “system-repair” Trojan Horse. Here, I knew where to go to collect the installation files for the Firefox Web browser that I was repairing, so I could restore their Web environment.

If you are creating a system-repair toolkit on a USB memory key, you may visit this resource to download installation packages for the Web browsers to that memory key. Or you can create a shortcut file to this site and store it on the memory key.


ARM-based microarchitecture — now a game-changer for general-purpose computing

Article:

ARM The Next Big Thing In Personal Computing | eHomeUpgrade

My comments

I have previously mentioned NVIDIA’s development of an ARM-based CPU/GPU chipset, and have noticed that this class of RISC chipset is about to resurface in the desktop and laptop computer scene.

What is ARM and how it came about

Initially, Acorn, a British computer company well known for the BBC Model B computer which was used as part of the BBC’s computer-education program in the UK, pushed on with a RISC-processor-based computer in the late 1980s. This became a commercial disaster due to the dominance of the IBM-PC and Apple Macintosh platforms as general-purpose computing platforms, even though Acorn tried to position the computer as a multimedia machine for the classroom. This was despite the Apple Macintosh and the Commodore Amiga, the multimedia computer platforms of that time, being based on Motorola 68000-series processors.

Luckily Acorn didn’t give up on the RISC microprocessor, and this class of processor was pushed into dedicated-purpose computer setups like set-top boxes, games consoles, mobile phones and PDAs. This chipset and class of microarchitecture became known as ARM, originally short for “Acorn RISC Machine” and later “Advanced RISC Machines”.

The benefit of the RISC (Reduced Instruction Set Computing) class of microarchitecture was an efficient instruction set that suited the task-intensive requirements of graphics-rich multimedia computing, compared to the CISC (Complex Instruction Set Computing) microarchitecture practised primarily in Intel 80x86-based chipsets.

Interest in RISC chipsets for general-purpose computing waned through the mid-2000s as the PowerPC processor line fell behind. Here, Apple rebuilt the Macintosh platform on the Intel architecture, which offered comparable performance at a cheaper cost to Apple, and started selling Intel-based Macintosh computers.

How is this coming about

An increasing number of makers of ARM-based microprocessors have pushed for these processors to return to general-purpose computing as a way of achieving power-efficient yet highly-capable computer systems.

This has come along with Microsoft offering a Windows build for the ARM microarchitecture as well as for the Intel microarchitecture. Similarly, Apple bought out a chipset designer which developed ARM-based chipsets.

What will this mean for software development

There will be a requirement for software to be built for the ARM microarchitecture as well as for the Intel microarchitecture, because these work on totally different instruction sets. This may be easier for Apple and Macintosh software developers because, when the Intel-based Macintosh computers came along, they had to work out a way of packaging software for both the PowerPC and the Intel processor families. Apple marketed these builds as “Universal” software builds because they suited the two main processor types.

Windows developers will need to head down this same path, especially if they work with native code where they fully compile their programs to machine code themselves. This may not be as limiting for people who work with managed code like the Microsoft .NET platform, because the runtime packages could simply be prepared for the instruction set that the host computer uses.
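As a sketch of the distinction above, the example below (my own illustration; the function name and build labels are assumptions, not part of .NET or any real installer) shows how an installer or runtime could detect the host’s instruction set so it can select the matching native build:

```python
import platform

# Minimal sketch (hypothetical): map the host machine's reported
# instruction-set architecture to a native-build label, the way an
# installer or managed runtime might pick the right package.

def native_build_for_host() -> str:
    """Return a build label matching the host CPU architecture."""
    machine = platform.machine().lower()
    if machine in ("x86_64", "amd64"):
        return "intel-x64"
    if machine in ("arm64", "aarch64"):
        return "arm64"
    if machine in ("i386", "i686", "x86"):
        return "intel-x86"
    return "unknown (" + machine + ")"

print(native_build_for_host())
```

A fully-compiled program has no such escape hatch: the architecture decision is baked in at build time, which is why separate ARM and Intel builds become necessary.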

Of course, Java programmers won’t need to face this challenge, because the language was designed around a “write once, run anywhere” scheme with “virtual machines” that sit between the computer and the compiled Java code.

For the consumer

People who run desktop or laptop computers that use ARM processors will need to look for packaged or downloadable software that is distributed as an ARM build rather than a build for Intel processors. This may be made easier through “universal” packages that bundle builds for both architectures.

It may not worry people who run Java or similar programs, because Oracle and others who stand behind these programming environments will need to port the runtime environments to these ARM systems.

Conclusion

This has certainly shown that the technology behind the chipsets that powered the more exciting computing environments of the late 1980s is still relevant in today’s computing life. It will even provide a competitive development field for the next generation of computer systems.

The next version of Windows is to have an ARM build as well as an Intel build, and Apple, used to delivering MacOS X for PowerPC RISC as well as Intel CPUs, is to implement its own ARM processors in Macintosh laptops.


People-tagging of photos–a valuable aid for dementia sufferers

Facebook started it. Windows Live Photo Gallery has implemented it since the 2010 version and made it easier with the 2011 version.

What is people-tagging

The feature I am talking about here is the ability to attach a metadata tag that identifies a particular person who appears in a digital image. These implementations typically have the tag applied to a specific area of the photo, usually defining the face or head of the person concerned. It will also become available in current or up-and-coming versions of other image-management programs, photo-sharing services, DLNA media servers and the like.

In the case of DLNA media servers, one of these programs could scan an image library and make a UPnP AV content-directory “tree” based on the people featured in one’s photo library.

Initially the concept, especially the Facebook implementation, was treated with fear and scorn because of privacy-invasion concerns. This is because that implementation allows the metadata to be related to particular Facebook Friends, and also allows the photo to be commented on by other Facebook Friends. The Windows Live Photo Gallery application, by contrast, attaches this metadata in a standardised XML form to the JPEG file, like it does with description tags and geotags, and offers the ability to make a copy of the file without the metadata for posting to Internet services.

A relevant implementation idea

One key benefit that I would see with this data when implemented with electronic picture frames, HDTVs and similar devices is the ability to overlay the tags over the picture when it is shown. This could be achieved by the user pressing a “display” or similar button on the device or its remote control. Devices with touchscreens, stylus-operated tablet screens or other pointer-driven “absolute” navigation setups could support a function that shows a “people tag” as you touch areas of the image.
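As a minimal sketch of that touch-driven idea (my own illustration; the class and field names are hypothetical, not from any product or standard), each people tag could carry a rectangle normalised to the image dimensions, and a touch point would be hit-tested against those rectangles:

```python
from dataclasses import dataclass

@dataclass
class PeopleTag:
    """A person's name plus their region of the photo."""
    name: str
    # Region as fractions of image width/height: (left, top, right, bottom)
    region: tuple

def tag_at_point(tags, x: float, y: float):
    """Return the name tagged at the touched point, or None."""
    for tag in tags:
        left, top, right, bottom = tag.region
        if left <= x <= right and top <= y <= bottom:
            return tag.name
    return None

tags = [PeopleTag("Nonna", (0.10, 0.05, 0.35, 0.45)),
        PeopleTag("Daddy", (0.55, 0.10, 0.80, 0.50))]
print(tag_at_point(tags, 0.2, 0.2))   # touch lands inside Nonna's region
```

Normalising the region to the image size means the same tag works whatever resolution the frame or TV displays the photo at.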

Benefit to Alzheimers sufferers

Here, this feature could help people who suffer from Alzheimer’s or other dementia-related illnesses by helping them remember who their family members or friends are. If the user has an image-management program or DLNA media-server setup capable of using these tags, they can call up a collection of images of the person they are thinking of and have those images appear on the screen. If the device has a communications-terminal function like a telephone, one of the images can be used as an index image to remember the correspondent by. This function could be extended through an automatically-updated index image or a slideshow that shows “key” images of the person.

Improving on the idea

To make this work, there needs to be an industry standard that defines how the people-tag metadata is stored in the JPEG file. As well, the standard has to support functions like one or more separate “nickname” fields for each person, which can be displayed as an option. This is because a person may be known to one or more other people by a nickname or relative-shortcut name (Mummy, Daddy, Nonna, etc).
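One possible shape for such a record (purely my illustration; as noted above, no such industry standard exists yet, and the names here are invented) would carry the formal name alongside optional nicknames, so a picture frame or TV can overlay whichever name suits the viewer:

```python
from dataclasses import dataclass, field

@dataclass
class TaggedPerson:
    """Hypothetical people-tag record with optional nicknames."""
    formal_name: str
    nicknames: list = field(default_factory=list)

    def display_name(self, prefer_nickname: bool = True) -> str:
        """Pick the name to overlay on screen."""
        if prefer_nickname and self.nicknames:
            return self.nicknames[0]
        return self.formal_name

grandmother = TaggedPerson("Maria Rossi", nicknames=["Nonna"])
print(grandmother.display_name())        # "Nonna"
print(grandmother.display_name(False))   # "Maria Rossi"
```

Keeping the nickname separate from the formal name is what lets different viewers of the same shared collection see the name they actually use for the person.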

Another issue is encouraging users to be consistent whenever they tag up a collection of images, which could be achieved through “batch-tagging” and / or improved facial recognition in image-management tools. Consistency may be a particular problem if two or more people are tagging images from their own collections to serve a third collection and they know the people by different names.

Conclusion

Once we cut through the hysteria surrounding people-tagging with digital images and focus on using it as part of desktop image-management systems rather than social networks, we can then see it as a tool for helping people remember who their loved ones are.


A feature that PowerPoint and other presentation software need – improvements for creating video and related works

Introduction

Most of us who use Microsoft PowerPoint or other business presentation software often want to use it to make TV-quality title and graphics slides for video productions that we create with other video software, usually software that is considered affordable for most users. This also includes preparing menu trees for DVD and Blu-Ray projects that are being built with affordable software. These needs will become more common as people use affordable video equipment to prepare video material as a way of augmenting their blogs, presenting on YouTube or even exhibiting through community television broadcasters.

As well, an increasing number of affordable consumer video playback devices such as DVD players, TVs, electronic picture frames and network media players are capable of showing JPEG images. Now many users want to be able to press these commonly-available devices into service as cost-effective “digital signage”. This is something I have talked about in my article on using DLNA-enabled equipment in the small business.

User-determined bitmap-export resolution

Most of this software doesn’t provide a way for the user to control the resolution of the JPEG or other bitmap images created when they export slides to these formats. This is a feature that I would consider very important, as the presentation programs keep the graphics for each slide in a vector format which is drawn on the screen, rather than a “raster” format which is an array of pixels. This would allow a user of these programs to make the aforementioned “TV-quality” graphics no matter the size of their screen.

One common situation where the user may need to adjust the resolution when exporting to JPEG is to prepare quick-loading, small-file images for a device with a small display. One obvious example would be a low-end electronic picture frame with a small display, and another would be a mobile phone or portable media player with less than VGA resolution.

Another situation would be a person who creates a presentation on a laptop or small desktop with a low-resolution display, then wants to export the JPEG files for a playback situation capable of handling high-resolution images, like a BD-Live Blu-Ray player connected via HDMI to a large direct-view screen or a projector. Similarly, the images could be used as part of a high-definition video production where that high-definition “crispness” is desired.

The user could be presented with a series of resolutions for the JPEG exports, conforming to the aspect ratio of the presentation, as part of exporting the images. As well, there could be support for users to set default image resolutions for particular aspect ratios and presentation types. The function could be simplified by an “SD” option for standard-definition output, an “HD1” option for 720-line high-definition output and an “HD2” option for 1080-line high-definition output.
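The preset idea above is simple arithmetic, sketched below (my own illustration, not a feature of PowerPoint or any named program; the preset names and default heights are assumptions): given a preset and the presentation’s aspect ratio, the export dimensions follow directly.

```python
# Hypothetical "SD / HD1 / HD2" export presets: pixel height per preset,
# with the width derived from the presentation's aspect ratio.
PRESET_HEIGHTS = {"SD": 576, "HD1": 720, "HD2": 1080}

def export_resolution(preset: str, aspect_w: int = 16, aspect_h: int = 9):
    """Return (width, height) in pixels for the chosen export preset."""
    height = PRESET_HEIGHTS[preset]
    width = round(height * aspect_w / aspect_h)
    return width, height

print(export_resolution("HD1"))        # (1280, 720) for 16:9 slides
print(export_resolution("HD2", 4, 3))  # (1440, 1080) for 4:3 slides
```

Deriving the width from the aspect ratio keeps the exported image free of letterboxing or stretching whatever shape the slides were authored in.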

Improved “export-to-video” and video integration

Another function worth considering would be “export-to-video” functionality for animated presentations, so one can output the presentations as regular SD or HD video files with a choice of common codecs and packaging methods.

As well, in the case of Microsoft PowerPoint, the program could integrate with Windows Live Movie Maker. This free program, which is the only video-editing program that Microsoft offers, could support functionality such as “create slide or animation in PowerPoint” so that users can prepare slides in PowerPoint then turn them into video content.

Conclusion

These kinds of improvements can allow users to put business presentation software to use in improving the quality of the video or “digital signage” they create with other affordable tools.
