Category: Computer Software

Using a NAS to hold operating-system updates

The current situation

A network-attached storage device can come in handy for storing software updates rather than having each computer download them repeatedly

Operating-system and application developers are now required to provide updates for their products during the product’s service life and beyond, so as to keep the computing environment efficient, secure and reliable. The updates may be released at regular intervals, such as monthly, or in response to a situation such as the discovery of a bug or security exploit.

New devices

A common situation with most regular and mobile computing devices is that, when a user takes delivery of one, they have to download a large data package to bring it up to date. This may be repeated many times if multiple units running the same platform are purchased.

Many devices

Similarly, a household may have multiple units running the same operating environment and has to keep all of these up to date. The typical example is a family with two or three children at secondary school. Here, there may be two or three computers for the children to use as well as one computer per adult. This could come about when the older child is given a more powerful computer as they enter senior high school, or another computer is handed to the younger children as they start secondary school.

But the same bandwidth would be used again and again to update each and every device. This may not be a problem for a couple with one device per adult, but it becomes one in environments with more than two devices, which are fast becoming the norm.

Using a network-attached storage to locally cache updates

Network-attached storage devices need a way to hold updates and patches locally for the operating systems and applications used by computers on a home or small-business network.

The practice is common in large-business computer setups because of the number of computers being managed there. But it could be brought to home and small-business setups through a simplified interface, based on a client application for regular or mobile operating environments that supports this kind of local updating.

A local client application to manage system-update needs

Here, the local-client software could register which operating environment the host computer runs and what eligible applications are on the system so as to prepare an “update manifest” or “shopping list” for the computer. The “shopping list” would be based on the core name of the software, no matter whether different computers are running different variants of the software, such as home laptops running Windows 7 Home Premium while a work-home laptop runs Windows 7 Professional. This manifest would be updated if new applications are installed, existing applications are removed or changed to different editions or the operating system is upgraded to a different version or edition.
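
As a rough sketch, such a manifest could be little more than a structured list keyed on each product’s core name. The field names below are hypothetical and don’t follow any actual update protocol:

```python
# Hypothetical "shopping list" for one computer. Entries are keyed on the
# core product name, so a Windows 7 Home Premium laptop and a Windows 7
# Professional laptop can share cached updates where their needs overlap.
manifest = {
    "hostname": "LOUNGE-LAPTOP",
    "os": {"name": "Windows", "version": "7", "edition": "Home Premium"},
    "applications": [
        {"name": "Microsoft Office", "version": "2010"},
        {"name": "Adobe Reader", "version": "10.1"},
    ],
}
# The client would re-send this whenever software is installed or removed,
# an application changes edition, or the operating system is upgraded.
```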

A local software manifest held by the NAS

This manifest is then uploaded to the NAS which runs a server application to regularly check the software developers’ update sites for the latest versions and updates for the programs that exist on the “shopping list”. There could be a “commonality” check that assesses whether particular updates and patches apply across older and newer versions of the same software, which can be true for some Windows patches that apply from Windows XP to Windows 7 with the same code.

At regular intervals, the NAS checks for the updates and downloads them as required. Here, it could be feasible to implement logic to check the updates and patches for malware, especially as this update path can be an exploit vector. Then the computers on the network check for new software updates and patches at the NAS.
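
A minimal sketch of what that NAS-side routine could look like, assuming hypothetical fetch_update and scan_for_malware helpers rather than any real vendor API:

```python
import hashlib

def refresh_update_cache(manifests, fetch_update, scan_for_malware, cache):
    # Merge every client's "shopping list" into one set of wanted updates,
    # so each update is downloaded once no matter how many computers need it.
    wanted = {(app["name"], app["version"])
              for m in manifests for app in m["applications"]}
    for name, version in sorted(wanted):
        package = fetch_update(name, version)   # download from the vendor's site
        if package is None or not scan_for_malware(package):
            continue                            # skip missing or suspect packages
        # "Commonality" check by content hash: a patch whose code is shared
        # across several product versions is stored in the cache only once.
        digest = hashlib.sha256(package).hexdigest()
        cache.setdefault(digest, package)
```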

Software requirements

Such a concept could be implemented at the client on most regular and mobile operating systems, and on network-attached storage devices built on a platform that allows software to be added.

It would also require the developers of operating systems and application software to support update checking by intermediate devices. Initially this could mean developer-specific setups being installed on the client device and the NAS, but it could move towards one software-update solution spanning many developers.

A change of mindset

What needs to happen is a change of mindset regarding software distribution in the home and small business. Here, the use of local network storage for software updates doesn’t just suit the big business with more than 50 computers in its fleet.

It could suit the household with two or more children in secondary school or a household with many young adults. Similarly a shop that is growing steadily and acquiring a second POS terminal or a medical practice that is setting up for two or more doctors practising concurrently may want this same ability out of their server or NAS.

Conclusion

The NAS shouldn’t just be considered a storage device but also a way of saving bandwidth when deploying updates into a household or small business that has multiple computers on the same platform.

MacOS X users can now consolidate multiple cloud-based notes storage services in one app

Article

Notesdeck Consolidates Evernote, Dropbox And iCloud Notes Into One App | Lifehacker Australia

My Comments

Some of us may start using cloud-driven notes-storage services like Evernote or Dropbox because they let us get at the material we create from any regular or mobile computing device.

But we can be encouraged to create accounts with more than one of these services, such as through a service provider having a presence on our new computers; or our colleagues, relatives or friends recommending a particular service to us. In some cases, we may exploit a particular service as a data pool for a particular project.

Subsequently we end up with multiple “front-end” links to different cloud-based storage services on our computers and end up not knowing where a particular piece of data is held – is it on Dropbox, is it on Evernote, is it on iCloud or whatever.

Now someone has written a MacOS X app that provides the same kind of interface and usability for these cloud-based services that an email client provides for most email services. In the Apple Macintosh context, Apple Mail allows you to set up multiple email accounts, such as the SMTP/POP3 mailbox your ISP gave you, the Exchange account work gave you, as well as the Gmail account you set up as a personal account.

At the moment the software, called NotesDeck, sells for $11.49, but according to the review it needs a few improvements. One issue raised was that entries are listed for services you aren’t set up with, unlike the typical email client, which doesn’t list service types you have no presence with. This could be rectified if the software used a provisioning user experience similar to the typical email client, where you click on “Add Account” to enter the details of the mailbox you are integrating into your client.

Of course, I would like to be sure that this program does allow you to transfer notes between accounts and also between local resources such as your word-processing documents. This may be important if you intend to consolidate your cloud-based notes services towards fewer services or copy the notes out to the magnum opus that you are working on.

Similarly, the program could be ported to the Windows platform or to the mobile platforms (iOS, Android, Windows Phone 8) so that users of these platforms can work with multiple accounts on their devices using one program.

Symantec Symposium 2012–My observations from this event

Introduction

Yesterday I attended the Symantec Symposium 2012 conference, which was a chance for Symantec to demonstrate the computing technologies it is developing and selling that are becoming important to big-business computing.

Relevance to this site’s readership

Most solutions exhibited at this conference are pitched at big business with a fleet of 200 or more computers. But there were resellers and IT contractors at this event who buy these large-quantity solutions to sell on to small-business sites who will typically have ten to 100 computers.

I even raised an issue in one of the breakout sessions about how manageability would be assured in a franchised business model such as most fast-food or service-industry chains. Here, this goal could be achieved through the use of thin-client computers or pre-configured equipment bought or leased through the franchisor.

As well, the issues and solution types of the kind shown at this Symposium tend to cross over between small sites and the “big end of town”, just as a lot of office technology, including the telephone and the fax machine, has done.

Key issues in focus were achieving a secure computing environment, supporting the BYOD device-management model, and the trend towards cloud computing for systems-support tasks.

Secure computing

As part of the keynote speech, we had a guest speaker from the Australian Federal Police touch on the realities of cybercrime and how it affects the whole of the computing ecosystem. As was raised in the previous interview with Alastair MacGibbon and Brahman Thiyagalingham about secure computing in the cloud-computing environment, the kind of people committing cybercrime now extends to organised-crime groups like the East-European mafia, alongside nation states engaging in espionage or sabotage. He also raised that it’s not just regular computers that are at risk; mobile devices (smartphones and tablets), point-of-sale equipment like EFTPOS terminals and other dedicated-purpose computing devices are also at risk. He emphasised keeping regular and other computer systems up to date with the latest patches for the operating environment and the application software.

This encompassed a cloud-driven email and Website verification system implemented as a proxy-server setup. It is designed for the real world of business computing, where equipment is likely to be taken out of the office and used on the home network or on public networks like hotel or café hotspots, and it stays away from the classic site-based corporate firewall-and-VPN arrangement for providing controlled Internet access to roaming computers. It also caters for real Internet-usage needs, like operating a company’s Social-Web presence, or personal Internet services like Internet banking or home monitoring that go with the ever-increasing workday. Yet it still allows an organisation to keep control over resources to prevent cyberslacking or the viewing of inappropriate material.

Another technique that I observed is the ability to facilitate two-factor authentication for business resources or customer-facing Websites. This is where the username and password are further protected by something else, in a similar way to how your bank account is protected at the ATM by your card and your PIN. It was initially achieved through hardware tokens – those key fobs or card-like devices that showed a random number on their display that you had to enter at your VPN login; or a smart card or SIM that required a hardware reader. Instead, Symantec developed a software token that works with most desktop or mobile operating systems and generates this random code. It can even exploit integrated hardware-security features to make this more robust, such as those in the Intel Ivy Bridge chipset in second-generation Ultrabooks.
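
Symantec’s token is proprietary, but soft tokens of this class typically follow the time-based one-time-password standard (RFC 6238). Here is a minimal sketch of that general technique, not of Symantec’s product:

```python
import hashlib, hmac, struct, time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    # Time-based one-time password along the lines of RFC 6238 / RFC 4226.
    counter = int(time.time() // interval)    # which 30-second time step we are in
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                   # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both ends derive the same six-digit code from a secret shared at enrolment,
# so the server can verify the code without it ever crossing the network.
print(totp(b"secret-provisioned-at-enrolment"))
```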

Advanced machine-learning has also played a stronger part in two more secure-computing solutions. For example, there is a risk-assessment setup where the environment around a connection or transaction can be assessed against what is normal for a user’s operating environment and practices. It is similar to the fraud-detection mechanisms that most payment-card companies are implementing, where they can detect and alert customers to abnormal transactions that are about to occur, like ANZ Falcon. This can trigger verification requirements for the connection or transaction, like the requirement to enter a one-time password from a software token or an out-of-band voice or SMS confirmation sequence.
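
The models inside these products are proprietary, but the gist can be shown with a toy risk score that compares a connection attempt against the user’s established baseline; every name and weight here is illustrative:

```python
def risk_score(attempt, baseline):
    # Toy example: add penalty points for each way the attempt strays from
    # the user's normal device, location and working hours. A real product
    # would use a trained model rather than hand-picked weights.
    score = 0
    if attempt["device_id"] not in baseline["known_devices"]:
        score += 40
    if attempt["country"] != baseline["usual_country"]:
        score += 40
    start, end = baseline["active_hours"]
    if not start <= attempt["hour"] <= end:
        score += 20
    return score

# A score over some threshold triggers step-up verification, such as a
# one-time password or an out-of-band voice/SMS confirmation.
needs_otp = risk_score(
    {"device_id": "tablet-7", "country": "RO", "hour": 3},
    {"known_devices": {"laptop-1"}, "usual_country": "AU", "active_hours": (8, 19)},
) >= 50
```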

The other area where advanced machine-learning plays a role in secure computing is data loss prevention. As we hear of information being leaked to the press or, at worst, laptops, mobile computing devices and removable storage full of confidential information disappearing and falling into the wrong hands, this field of information security is becoming more important across the board. Here, they used the ability to “fingerprint” confidential data like payment-card information and apply handling rules to it. This includes on-the-fly encryption of the data, the establishment of secure-access Web portals, and sandboxing of the data. The rules can be applied at different levels and cover the different ways data is transferred between computers, such as shared folders, public-hosted storage services (Dropbox, Evernote, GMail, etc), email (both client-based and Webmail) and removable media (USB memory keys, optical disks). The demonstration focused more on payment-card numbers, but I raised questions about information like customer/patient/guest lists or similar reports, and this system supports creating the necessary fingerprints for such information as required.
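
As a simple illustration of how payment-card data can be “fingerprinted”, data-loss-prevention engines typically pair a pattern match with the Luhn checksum that card numbers must satisfy, which keeps false positives down. A minimal sketch:

```python
import re

def luhn_ok(number: str) -> bool:
    # Luhn checksum that valid payment-card numbers satisfy.
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_card_numbers(text: str):
    # Flag runs of 13-16 digits (allowing spaces or dashes) that also pass
    # the Luhn check; matching the digit pattern alone would over-report.
    for match in re.finditer(r"\b(?:\d[ -]?){13,16}\b", text):
        candidate = re.sub(r"[ -]", "", match.group())
        if luhn_ok(candidate):
            yield candidate

print(list(find_card_numbers("Order notes: 4111 1111 1111 1111 paid in full")))
```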

Cloud-focused computing support

The abovementioned secure-computing application makes use of cloud-computing technology, relying on data centres scattered around the world.

The Norton 360 online backup solution that is typically packaged with some newer laptops is also the basis for cloud-driven data backup. This could support endpoint backup as well as backup for servers, virtual machines and the like.

Mobile computing and BYOD

Symantec have approached mobile computing and BYOD along two different paths. They cater for fully-managed devices, which may appeal to businesses running fleets of devices they own or using tablets as interactive customer displays. But they also allow for “object-specific” management, where particular objects (apps, files, etc) can be managed or run according to particular policies.

This includes the ability to provide a corporate app store offering in-house apps, Web links or commercial apps so users know what to “pick up” on their devices. These apps are then set up to run to the policies that affect how that user runs them, including control of data transfer. This setup may also please big businesses whose services small businesses provide as agents or resellers, such as Interflora. Here, they could run a business-specific app store with line-of-business apps, like a flower-delivery-list app that runs on a smartphone. There is also the ability to remotely vary and revoke permissions for the apps, which could come in handy when a device’s owner walks out of the organisation.

Conclusion

What this conference showed, at least, is the direction business computing is taking; it was also a chance to see the core trends affecting this class of computing, whether you are at the “big end of town” or not.

Could an upgrade to Windows 8 yield a performance boost to your computer

Article

Installing Windows 8 on your old PC could turn it into Greased Lightning | ZDNet

My Comments

Some of us might think about upgrading an existing computer to Windows 8 when the operating system is released rather than staying with Windows 7. There would understandably be a lot of doubt about this process but, in some cases, it could improve the computer’s performance.

A Windows computer built in the last four years, i.e. since Windows Vista was launched, could easily benefit from an upgrade to Windows 8, due to reworked code written to work best with the recent generation of hardware.

The speed increase also comes due to natively-integrated desktop security software as well as Internet Explorer 10 and integrated cloud-computing support.

But I would recommend that the system being upgraded meets current expectations for RAM and secondary-storage capacity. This would be 4GB of RAM and 128GB of secondary storage as a bare minimum; if the machine uses a hard disk rather than solid-state storage, I would expect at least 320GB. As I subsequently mention, the operating system is at a price point that may allow you to budget in a hardware upgrade to these expectations.

Users can upgrade their Windows computers to the new operating system at a cost-effective price, due mainly to the electronic distribution Microsoft is using, i.e. you buy and download the operating system online rather than buying a packaged copy with an optical disk. This is similar to how Apple distributed the MacOS X Lion and Mountain Lion operating-system upgrades. It also comes with some attractive licensing terms, including the ability for those of us who build our own systems to legitimately purchase a “system builder” package.

It would be OK to go ahead with the upgrade if you can handle the changes to how the operating environment works, such as the new touch-focused Start “dashboard” and having to “descend further” to get to the standard operating interface. But I would recommend that those of us who aren’t computer-competent should stay with Windows 7 unless they are buying a new computer that comes with Windows 8.

What could the OUYA Android games console be about

Article

An update on OUYA’s exciting Android-based console project: Success! | Hello Android

From the horse’s mouth

OUYA Web site

Kickstarter Web site

My Comments

From my observations, Android has been known to offer an open-frame computing platform for the smartphone and tablet. This has included access to independent content services as well as access to third-party browsers, independent content-transfer paths, and standards-based setup.

Now a Kickstarter campaign has funded the OUYA Android-based gaming platform, which has been described as an effort to “open up” the last “closed” gaming environment, i.e. the television. This environment has been effectively controlled by Microsoft, Sony and Nintendo through the sale of loss-leading consoles, with developers finding it hard to get onto one of these console platforms without ponying up large sums of money or satisfying onerous requirements.

The OUYA gaming platform could be seen as an effort to bring Android’s values of openness to this class of device, especially by allowing independent games authors and distributors access to a large-screen console gaming platform. One of the main requirements is that each title provide a free-to-play component, as has successfully happened with games for regular computers, mobile devices and Web-driven online / social play, where games were available with demo levels or with optional subscriptions, microcurrency trading or paid add-on content.

Other companies have stood behind OUYA as an IPTV set-top-box platform, with TuneIn Radio (an Internet-radio directory for mobile phones) and VEVO (an online music-video service with access to most of the 1980s-era classics) giving support for it.

The proof-of-concept console uses the latest technology options: a Tegra 3 ARM processor, 1GB of RAM and 8GB of secondary flash storage, 802.11g/n Wi-Fi and Ethernet networking, as well as a Bluetooth 4.0 Smart-Ready wireless peripheral interface. The controllers have analogue joysticks, a D-pad and a trackpad, and link via this Bluetooth interface. They are also a lightweight statement of industrial design.

But I would like to see some support for additional local storage, such as the ability to work with a USB hard disk or a NAS for local games storage. This could allow one to “draw down” extras for a game that they are playing.

What is possible for the OUYA gaming platform

Hardware development and integration

But what I would like to see out of this is the OUYA platform being available as an “open-source” integration platform. This could mean that someone building a smart TV, an IPTV set-top box or a PVR could integrate the OUYA platform into their product in the same vein as has successfully happened with the Android platform. For example, Philips or B&O could design a smart TV that uses the OUYA platform for gaming, or a French ISP like Free, SFR or Bouygues Télécom offering a “triple-play” service could have the OUYA platform in an iteration of the “décodeur” they supply to their customers.

Similarly the specification that was called out in the proof-of-concept can be varied to provide different levels of functionality like different storage and memory allowances or different hardware connections.

Software development and distribution

For software development, the OUYA platform can be seen as an open platform for mainstream and independent games studios to take large-screen console gaming further without having to risk big sums of money.

Examples of this could include the development and distribution of values-based games titles which respect desired values like less emphasis on sex or violence; as well as allowing countries or regions that haven’t built up a strong electronic-games presence, like parts of Europe, to build this presence up. There is also the ability to explore games types that you may not have had a chance to explore on the big screen.

The OUYA platform could satisfy and extend vertical markets like venue-specific gaming / entertainment systems such as airline or hotel entertainment subsystems or arcade gaming; and could work well for education and therapy applications due to this open-frame platform.

Conclusion

What needs to happen is greater industry and consumer awareness of the OUYA open-source large-screen gaming platform, so that it is placed on the same level as the three established platforms. This could open up a path to the kind of open-frame computing success that the Android platform has benefited from.

The Apple Macintosh platform–now the target for malware

Introduction

In the late 1980s, when the scourge of computer viruses hitting popular home and small-business computing platforms was real, the issue was present across all of the platforms in use at the time. This encompassed Apple’s two desktop platforms, i.e. the Apple II and the Macintosh, along with the Commodore Amiga, the Atari ST and, of course, the MS-DOS-driven “IBM” platform. The computer magazines ran articles about this threat and how to protect against it and disinfect your computing environment from these software pests.

But through the 1990s, the Windows / DOS systems were the main malware target, especially the Windows 98 and XP systems that ran Internet Explorer due to their popularity. The other platforms weren’t targeted that much due to their lesser popularity in the field and the computer press didn’t touch on that issue much. It was also because some of these platforms like the Amiga and Atari ST weren’t being supported any more by their manufacturers.

But lately there has been a trend for people to hop from the Windows platform to the Macintosh platform due to reduced targeting by malware authors and the perceived hardening that Apple has done to this platform. This has been augmented by the popularity of the iOS mobile-computing devices, i.e. the iPhone, iPod Touch and iPad, as well as the elegant computing devices available on this platform. All of these factors have led to an increased popularity of Apple Macintosh computers in the field, and they have become a target for malware authors.

But most Macintosh users run their computers with the Apple-authored Safari Web browser and are likely to implement Apple iWork or Microsoft Office productivity software. They also run these computers without any desktop-security or system-maintenance tools because they perceive that Apple has made the task of keeping these computers in ideal condition easier than with the Windows platform.

What can Macintosh users do

Macintosh users can harden their computers against malware by installing and keeping up-to-date a desktop security suite. A free example of this is the Avast program that has been recently ported to the Macintosh platform and another paid-for premium example is the Kaspersky desktop-security suite. These programs are, along with a system-maintenance suite like Norton Utilities, a must-have so you can keep these computers working in an ideal condition.

Another practice that I always encourage is to keep all the software on your Macintosh computer in lock-step with the latest updates. This can also help with any bugs or stability issues that affect how the software runs on your computer. Here, you may want to enable a fully-automatic update routine for security and other important updates, or a semi-automatic routine where the Macintosh checks for these updates and draws your attention to any newly-available ones, which you then deploy.

It is also worth disabling Adobe Flash Player, Java and similar “all-platform runtime” environments if you don’t need to run them; there are many articles on the Web about this in response to the Flashback Trojan Horse. Otherwise, make sure these runtime environments are kept updated. Similarly, you may want to change your default Web browser to an open-source-based browser like Firefox or Chrome, which are more likely to be kept up-to-date against known bugs and weaknesses. Going without Java was also made easier with new-build installations of MacOS X Lion, i.e. when you had a new Macintosh with this operating system “out of the box”, as Lion doesn’t include the Java runtime by default. Prior operating systems had the Java runtime installed by default, and this survived any operating-system upgrade.

What Apple needs to do

Apple needs to come down from its silver cloud and see the realities of keeping a computer in good order. For example, they need to provide desktop-security and system-tuning tools so that users can keep their Macintosh computers in tip-top condition and free from malware. They also need to transparently and immediately roll all updates and upgrades that Oracle releases for the Java environment into their distribution, or allow Oracle to distribute the Java environment for the Macintosh platform.

As well, they need to take a leaf out of Microsoft’s book by implementing a “default-standard-user” setup that has the user operating at a “desktop-user” privilege level by default. The user is then asked whether they want to elevate to the “administrator” privilege level when they perform a task that requires it, and only for the duration of that task. This is important with home and small-business computer setups, where there is typically only one fully-privileged user created for the system.

Conclusion

What the recent “Flashback” Trojan Horse has done is to bring the Apple Macintosh platform to a real level where issues concerning desktop security and system maintenance are as important for it as they are for other platforms.

A suggestion to make Black Friday the day to update the software on your parents’ computer

Article

Forget Shopping, Friday Is Update Your Parents’ Browser Day! – Alexis Madrigal – Technology – The Atlantic

My comments

You are celebrating Thanksgiving at your parents’ house and notice that their old desktop computer (which has ended up as the family computer) is running Windows XP and Internet Explorer 6. But they see it as their “comfort zone” even though newer versions of Windows and Internet Explorer have been released.

This article suggests that you update your parents’ computer to the latest version of the Web browser they are using for their operating system, because most Web sites are being re-engineered to work with newer browsers rather than the likes of Internet Explorer 6.

It could be done as part of keeping the computer in good order by doing other software and driver updates. You may even think of updating their computer to Windows 7 if it is running relatively-new hardware and use this package as a Christmas gift idea.

But the main issue with this kind of software update is that you may need to spend a lot of time teaching them the ropes of the new software with the new user interface elements. This may involve long telephone calls or regular house visits to walk them through parts of the user interface that they may find very difficult, as I have experienced with teaching people different computer skills.

Don’t forget that the Browser Choice Screen is your one-stop Web browser port-of-call

Previous Coverage – HomeNetworking01.info

Understanding The Browser Choice Screen (EN, FR)

Web site

Browser Choice Screen – http://browserchoice.eu

My Comments

Previously, I have covered the Browser Choice Screen, which was part of Microsoft’s anti-trust settlement with the European Commission concerning Internet Explorer. It was to apply to consumer and small-business Windows setups in the European Union, where people were to be offered a choice of Web browser for their operating environment.

But I still see this menu Web page as a “one-stop” port-of-call for people anywhere in the world who want to install new Web browsers or repair a damaged Web-browser installation. This resource came in handy when I was repairing a houseguest’s computer that was damaged by a “system-repair” Trojan Horse. Here, I knew where to go to collect the installation files for the Firefox Web browser that I was to reinstall so I could restore their Web environment.

If you are creating a system-repair toolkit on a USB memory key, you may visit this resource to download installation packages for the Web browsers to that memory key. Or you can create a shortcut file to this site and store it on the memory key.

ARM-based microarchitecture — now a game-changer for general-purpose computing

Article:

ARM The Next Big Thing In Personal Computing | eHomeUpgrade

My comments

I have previously mentioned NVIDIA developing an ARM-based CPU/GPU chipset, and have noticed that this class of RISC chipset is about to resurface in the desktop and laptop computer scene.

What is ARM and how it came about

Initially, Acorn, a British computer company well known for the BBC Model B that was used in the BBC’s computer-education program in the UK, pushed on with a RISC-processor-based computer in the late 1980s. This became a commercial failure due to the dominance of the IBM-PC and Apple Macintosh platforms as general-purpose computing platforms, even though Acorn was pitching its machine as a multimedia computer for the classroom. This is despite the Apple Macintosh and the Commodore Amiga, the multimedia computer platforms of that time, being themselves based on Motorola’s 68000-series processors.

Luckily, Acorn didn’t give up on the RISC microprocessor and had this class of processor pushed into dedicated-purpose computer setups like set-top boxes, games consoles, mobile phones and PDAs. This chipset and class of microarchitecture became known as ARM (originally “Acorn RISC Machine”).

The benefit of the RISC (Reduced Instruction Set Computing) class of microarchitecture was an efficient instruction set that suited the task-intensive requirements of graphics-rich multimedia computing, compared with the CISC (Complex Instruction Set Computing) microarchitecture practised primarily in Intel 80x86-based chipsets.

Interest in RISC on the desktop waned from the mid-2000s, when Motorola pulled out of the processor game and ceased manufacturing PowerPC processors. Apple then rebuilt the Macintosh platform for the Intel architecture, which was offering RISC-class performance at a cheaper cost to Apple, and started selling Intel-based Macintosh computers.

How is this coming about

An increasing number of processor makers who have made ARM-based microprocessors have pushed for these processors to return to general-purpose computing as a way of achieving power-efficient highly-capable computer systems.

This has come along with Microsoft offering a Windows build for the ARM microarchitecture as well as for the Intel microarchitecture. Similarly, Apple bought out a chipset designer which developed ARM-based chipsets.

What will this mean for software development

There will be a requirement for software to be built for the ARM microarchitecture as well as for the Intel microarchitecture, because the two work on totally different instruction sets. This may be easier for Apple and Macintosh software developers because, when the Intel-based Macintosh computers came along, they had to work out a way of packaging software for both the PowerPC and Intel processor families. Apple marketed these as “Universal” software builds because of the need to suit the two main processor types.

Windows developers will need to head down this same path, especially if they work with orthodox code that they fully compile to machine code themselves. This may not be as limiting for people who work with managed code like the Microsoft .NET platform, because the runtime packages could simply be prepared for the instruction set the host computer uses.

Of course, Java programmers won’t need to face this challenge, due to the language being designed around a “write once, run anywhere” scheme with “virtual machines” that sit between the computer and the compiled Java code.

For the consumer

People who run desktop or laptop computers that use ARM processors may need to look for packaged or downloadable software that is distributed as an ARM build rather than an Intel build. This may be made easier through the use of “universal” packages as part of the software-distribution requirement.
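
As an illustration of what a “universal” installer has to do under the bonnet, here is a toy sketch that picks a native build by asking the platform which instruction set it is running on; the package names are made up:

```python
import platform

# Map of machine identifiers to hypothetical native builds, the way a
# "universal" package might choose a payload at install time.
ARCH_BUILDS = {
    "x86_64": "myapp-intel64.bin",
    "AMD64": "myapp-intel64.bin",    # Windows reports Intel/AMD 64-bit this way
    "armv7l": "myapp-arm32.bin",
    "aarch64": "myapp-arm64.bin",
}

machine = platform.machine()
build = ARCH_BUILDS.get(machine)
if build is None:
    raise SystemExit(f"No native build for {machine}; try a managed-code version")
print(f"Installing {build} for {machine}")
```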

It may not worry people who run Java or similar programs, because Oracle and others who stand behind these programming environments will need to port the runtime environments to these ARM systems.

Conclusion

This has certainly shown that the technology behind the chipsets that powered the more exciting computing environments of the late 1980s is relevant in today’s computing life, and will even provide a competitive development field for the next generation of computer systems.


People-tagging of photos–a valuable aid for dementia sufferers

Facebook started it. Windows Live Photo Gallery has implemented it since the 2010 version and made it easier with the 2011 version.

What is people-tagging

The feature I am talking about here is the ability to attach a metadata tag that identifies a particular person who appears in a digital image. These implementations typically have the tag applied to a specific area of the photo, usually defining the face or head of the person concerned. It is also becoming available in current or up-and-coming versions of other image-management programs, photo-sharing services, DLNA media servers and the like.

In the case of DLNA media servers, one of these programs could scan an image library and make a UPnP AV content-directory “tree” based on the people featured in one’s photo library.
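
Deriving that “tree” needs little more than a grouping pass over the tags. A minimal sketch, where read_people_tags stands in for whatever routine extracts the people-tag metadata from each image file (it is not a real DLNA API):

```python
from collections import defaultdict

def build_people_tree(photo_paths, read_people_tags):
    # Group photos into one branch per tagged person, the way a DLNA
    # content-directory "By Person" view might be presented.
    tree = defaultdict(list)
    for path in photo_paths:
        for person in read_people_tags(path):
            tree[person].append(path)
    return dict(tree)
```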

Initially the concept, especially the Facebook implementation, was treated with fear and scorn because of privacy-invasion worries, since that implementation allows the metadata to be related to particular Facebook Friends and also allows the photo to be commented on by other Facebook Friends. By contrast, the Windows Live Photo Gallery application attaches this metadata in a standardised XML form to the JPEG file, as it does with description tags and geotags, and there is the ability to make a copy of the file without the metadata for posting to Internet services.
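
For the curious, the standardised form referred to here is an XMP structure embedded in the JPEG file. The fragment below follows the shape of the Microsoft People Tags schema as I understand it; the namespace URIs and property names are from memory and should be verified against Microsoft’s documentation:

```python
# Illustrative XMP people-tag region. The rectangle is "x, y, w, h"
# normalised to the image dimensions; schema details are my best
# understanding of the Microsoft People Tags format, not verified.
PEOPLE_TAG_XMP = """\
<rdf:Description xmlns:MP="http://ns.microsoft.com/photo/1.2/"
                 xmlns:MPRI="http://ns.microsoft.com/photo/1.2/t/RegionInfo#"
                 xmlns:MPReg="http://ns.microsoft.com/photo/1.2/t/Region#">
  <MP:RegionInfo rdf:parseType="Resource">
    <MPRI:Regions>
      <rdf:Bag>
        <rdf:li rdf:parseType="Resource">
          <MPReg:Rectangle>0.36, 0.20, 0.15, 0.22</MPReg:Rectangle>
          <MPReg:PersonDisplayName>Nonna</MPReg:PersonDisplayName>
        </rdf:li>
      </rdf:Bag>
    </MPRI:Regions>
  </MP:RegionInfo>
</rdf:Description>
"""
```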

A relevant implementation idea

One key benefit that I would see with this data when implemented with electronic picture frames, HDTVs and similar devices is the ability to overlay the tags over the picture when it is shown. This could be achieved by the user pressing a “display” or similar button on the device or its remote control. Devices with touchscreens, stylus-operated tablet screens or other pointer-driven “absolute” navigation setups could support a function that shows a “people tag” as you touch areas of the image.

Benefit to Alzheimer’s sufferers

Here, this feature could help people who suffer from Alzheimer’s or other dementia-related illnesses by helping them remember who their family members or friends are. If the user has an image-management program or DLNA media-server setup capable of using these tags, they can call up a collection of images of the person they are thinking of and have those images appear on the screen. If the device has a communications-terminal function like a telephone, one of the images can be used as an index image to remember the correspondent by. This function could be extended by the use of an automatically-updated index image or a slideshow that shows “key” images of the person.

Improving on the idea

To make this work, there needs to be an industry standard that defines how the people-tag metadata is stored in the JPEG file. As well, the standard has to support functions like one or more separate “nickname” fields for each person, which can be displayed as an option. This is because a person may be known to others by a nickname or relative-shortcut name (Mummy, Daddy, Nonna, etc).

Another issue is encouraging users to be consistent whenever they tag up a collection of images. This could be achieved through “batch-tagging” and / or improved facial recognition in image-management tools. Consistency may be a problem if two or more people are tagging images from their own collections to serve a third collection and they know the people by different names.

Conclusion

Once we cut through the hysteria surrounding people-tagging of digital images and focus on using it as part of desktop image-management systems rather than social networks, we can see it as a tool for helping people remember who their loved ones are.