Category: Operating Systems

Consumer Electronics Show 2011–Part 3

Now we come to the issue of network-infrastructure equipment that will need to support the increasing demands placed on the home network by the previously-mentioned smartphones, tablet computers and Internet-enabled TVs.

Network Infrastructure

Network Connectivity

Some newer chipsets have appeared which will increase network bandwidth for the 802.11n Wi-Fi segment and the HomePlug AV segment. The current products may rely on manufacturer-specific implementations, which doesn’t bode well for interoperability with the eventual standards.

The first new “call” is the 450Mbps 802.11n WPA2 WPS Wi-Fi segment which is being provided by most network-equipment makers for their midrange routers and access points. Access points and routers that work to this specification use three 802.11n radio streams to maintain the high throughput. The full bandwidth can only be achieved if the client device is equipped with an 802.11n wireless network adaptor that supports all three streams, but your existing devices may still benefit because the three streams reduce contention for the wireless bandwidth.

Most of the routers shown at the Consumer Electronics Show this year that support the 3-stream 450Mbps level for the 802.11n wireless network functionality also offered dual-band dual-radio operation to the same specification. Here, these devices could work on both the 2.4GHz band and the 5GHz band at this level of performance.

Some manufacturers were trying out the idea of a 60GHz high-bandwidth media network which may be based on a Wi-Fi (802.11 technology) or other proprietary scheme. This could lead to three-band multimedia routers and access points that use 2.4GHz and 5GHz for regular whole-home wireless networking and 60GHz for same-room wireless networking.

The second new “call” is the 500Mbps throughput being made available on high-end HomePlug AV devices. These powerline network devices may only achieve the high bandwidth on a segment consisting solely of high-bandwidth devices based on the same chipset. Here, I would wait for the HomePlug AV2 standard to be fully ratified before you chase the 500Mbps bandwidth on your HomePlug segment. Of course, these devices remain compatible with regular HomePlug AV segments.

The third new “call” is for midrange high-throughput routers to have Gigabit Ethernet on the WAN (Internet) port as well as the LAN ports. This is more relevant nowadays as fibre-based next-generation broadband services are rolled out in most countries.

Everyone who exhibited network-infrastructure equipment offered at least one 450Mbps dual-band dual-radio router with Gigabit Ethernet on the WAN (Internet) connection as well as the wired-LAN connection. As well, most of these routers are equipped with circuitry that supports QoS when streaming media and some of them have a USB file-server function which can also provide media files to the DLNA Home Media Network.

Trendnet also offered an access point and a wireless client bridge that worked to this new level of 802.11n performance. They also demonstrated power-saving circuitry for Wi-Fi client devices which throttles back transmission power if the device is in the presence of a strong access point signal for their network. This was ostensibly to be “green” when it comes to AC-powered devices but would yield more real benefit for devices that have to run on battery power.

They also ran with the TPL-410AP, a HomePlug AV Wireless-N multi-function access point. It is another of those HomePlug access points that can “fill in the gap” in a wireless network or extend the Wi-Fi network out to the garage, barn or old caravan.

They also issued the TEW-656BRG 3G Mobile Wireless N Router, which is an 802.11n “MiFi router” that is powered by USB and works with most 3G / 4G modem sticks available in the USA. Its small design allows it to be clipped on to a laptop’s lid or a small LCD monitor.

TP-Link had their 450Mbps three-stream dual-band dual-radio router with Gigabit on both WAN and LAN Ethernet connections. As well, they fielded a single-stream 150Mbps USB stick as the TL-WNT23N.

They also tried their hand at IP surveillance with the TL-SC4171G camera. This camera can do remote pan-tilt and 10x digital zoom. It connects to the network via Ethernet or 802.11g Wi-Fi (not that much chop nowadays) and is equipped with an IR ring for night capture, as well as a microphone and speaker.

Netgear were more active with the 450Mbps three-stream routers with Gigabit LAN. Two of the models are broadband routers with Gigabit WAN, while one is an ADSL2 modem router which I think would serve the European and Australian markets more easily. The top-end model of the series has a USB file-server function which works with the DLNA Home Media Network and also with TiVo “personal-TV devices”.

They also released the XAV5004 HomePlug AV switch, which is the 500Mbps version of their earlier “home-theatre” four-port HomePlug switch. As well, they released the XAV2001, a compact HomePlug adaptor which connects to the regular standards-based HomePlug AV segment.

They have also released the MBR1000 Mobile Broadband Router which works with 3G/4G wireless broadband or Ethernet broadband. This unit is being provided “turnkey” for Verizon’s new 4G LTE service.

Netgear have also fielded the VEVG3700 VDSL2/Gigabit Ethernet dual-WAN router with Gigabit Ethernet LAN and a CAT-iq DECT VoIP phone base station. This device, which is pitched at triple-play service providers, also supports DLNA server functionality. As well, they had a DECT VoIP kit available for these providers.

As well, Netgear have made their first steps into IP surveillance for home and small business with a camera and an Android-driven screen for this purpose.

D-Link’s network hardware range includes the three-stream 450Mbps routers with Gigabit WAN/LAN, a multifunction access point / repeater for the 802.11n network, as well as a new DLNA-enabled network-attached storage range.

As far as the MoCA TV-coaxial-cable network is concerned, Channel Master is the only company to release any network hardware for this “no-new-wires” network. It is in the form of a MoCA-Ethernet 4-port switch for the home theatre.

“Mi-Fi” wireless-broadband routers

Every one of the US cellular-telecommunications carriers is catching on to the 4G bandwagon, not just with smartphones and tablets but also with wireless-broadband routers.

Sprint have a unit for their WiMAX service while Verizon are fielding a Samsung LTE “Mi-Fi” as well as the aforementioned Netgear MBR1000 router.

Computer hardware and software

Monitors

Some of the companies who manufacture monitors are looking at the idea of “Internet-connected” monitors which have a basic Web browser in them so you don’t have to fire up a computer to view the Web.

CPU/GPU combo chips

These new processor chips combine a CPU, which is a computer’s “brain”, with the graphics processor which “draws” the user interface on to the screen. AMD and Intel were premiering the “Accelerated Processor Units” and the Core “Sandy Bridge” processors respectively at the CES this year.

Intel were trumpeting the fact that this technology could make it harder to pirate movie content, but this is more about mainstream computing and small-form-factor hardware driving demand for this space-saving and power-saving processor hardware.

Sony has committed to AMD to use the Zacate “Accelerated Processor Unit” in some of their VAIO laptops.

Other hardware

AMD haven’t forgotten the “performance computing” segment when it comes to processor chips and released the quad-core and 6-core “Phenom” desktop and gaming-rig CPUs.

Seagate have also made the “GoFlex” removable / dockable hard disks a de-facto standard by building alliances with third parties to make hardware that works with it. Could this be another “VHS-style” alliance for dockable hard disks?

Microsoft also used this show to premiere their Touch Mouse, which uses the same touch-operation method as Apple’s Magic Mouse. Do I see an attempt by them to “snap at” Apple when it comes to “cool hardware” as well as software?

The Microsoft Platform

There has been some activity with the Microsoft Windows platforms now that set-top boxes and tablet computers are becoming the “order of the day”.

One direction Microsoft is taking is to port the Windows Platform, which was primarily written for Intel-Architecture processors, to the Acorn ARM-architecture processors. The reason that this port is taking place is due to these energy-efficient RISC processors being commonly used in battery-driven applications like tablet computers. They are also popular with other dedicated multimedia devices like set-top boxes and TV applications.

As well, Microsoft will be working on a lightweight Windows build for TV applications like set-top boxes, even though they have previously written Windows CE builds for this class of device.

Microsoft also want to make a variant of Windows Phone 7 for tablet computers and are starting work on the Windows 8 project.

Similarly, Samsung has demonstrated the second incarnation of the Microsoft Surface platform. This one comes in a slimmer table-based form rather than a unit that is as thick as the 1980s-style “cocktail-table” arcade game machine.

Conclusion

The Consumer Electronics Show 2011 has certainly put the connected home on the map. This is due to affordable smartphones and tablet computers becoming more ubiquitous and Internet-provided video services becoming an increasing part of American home life.

It will be interesting to see what will happen at the other “pillar” of the consumer-electronics trade-fair cycle – the Internationale Funkausstellung; and how much more prevalent the Internet TV, smartphone and tablet computer lifestyle will be in Europe and Asia.

The Mac App Store–what could this mean for the Apple Macintosh platform?

Mac App Store launching in January sans Game Center and in-app purchases? | Engadget

My Comments

At the moment, Apple Macintosh users can buy software in a packaged form from any store that sells software for this platform. As well, they can download software from various Websites, including the developers’ own Websites and run this software on their computers.

Now Apple is introducing the Mac App Store for the Macintosh desktop, an extension of the iTunes App Store which is the only way to get extra software for any iOS device (iPhone, iPod Touch or iPad). The main question I have is whether this App Store will exist simply as another storefront for MacOS X software, where such software can be purchased with iTunes gift cards or a regular credit card, or as a move by Apple to make this storefront the only way for MacOS X users to add software to their computers.

There has been controversy about the App Store in relation to the iOS platform over the last few years because it allowed Apple to have greater control over the software that could run on that platform. Situations that came about included outlawing Adobe Flash on the iOS platform and prohibiting the supply of software that Steve Jobs didn’t see fit to allow, such as Wi-Fi site-survey tools. I had talked with some friends of mine who are regular Mac users and they feared that if Apple set up the App Store on the Macintosh platform, it could become the start of a situation where you can’t load applications on a Mac unless they came through the App Store.

What I would like to see of the Mac App Store is that it exists as another storefront and “download city” for Macintosh-platform software and that MacOS developers can maintain their own sites and distribution channels for such software. It should then keep the Macintosh platform a flexible desktop-computing platform with the expectations of this class of platform rather than a desktop version of the Apple iOS embedded-computing platform.

Another major change for the Intel-based PC platform will shorten the boot-up cycle

News articles

Getting a Windows PC to boot in under 10 seconds | Nanotech – The Circuits Blog (CNET News)

BBC News – Change to ‘Bios’ will make for PCs that boot in seconds

My comments

The PC BIOS legacy

The PC BIOS, which is the functional bridge between the time you turn a personal computer on and when the operating system can be booted, was defined in 1979 when personal computers of reasonable sophistication came on the scene. At that time the best peripheral mix for a personal computer was a “green-screen” text display, two to four floppy disk drives, a dot-matrix printer and a keyboard. Rudimentary computers at that time used a cassette recorder rather than floppy-disk drives as their secondary storage.

Through the 1980s, there was improved BIOS support for integrated colour graphics chipsets and the ability to address hard disks. In the 1990s, there were newer needs such as support for networks, mice, higher-resolution graphics and alternative storage types, but the BIOS wasn’t improved for them. In some cases, the computer had to have extra “sidecar” ROM chips installed on VGA cards or network cards to permit support for VGA graphics or booting from the network. Similarly, interface cards like SCSI cards or add-on IDE cards couldn’t support “boot disks” unless they had specific “sidecar” ROM chips to tell the BIOS that there were “boot disks” on these cards.

These BIOS setups were only able to boot to one operating environment or, in some cases, could boot to an alternative operating environment such as a BASIC interpreter that used a cassette recorder as secondary storage. If a user wanted to work with a choice of operating environments, the computer had to boot to a multi-choice “bootloader” program, which was a miniature operating system in itself and presented a menu of operating environments to boot into. This concept was extended to the lightweight Web browsers, email clients and media players that are used in some newer laptops for “there-and-then” computing tasks.

The needs of a current computer, with its newer peripheral types and connection methods, were too demanding on this old code and typically required that the computer take a significant amount of time from switch-on to when the operating system could start. In some cases, there were reliability problems as the BIOS had to cope with existing peripheral types being connected via newer connection methods, such as Bluetooth wireless keyboards or keyboards that connect via the USB bus.

The Unified Extensible Firmware Interface improvement

This is a new improvement that replaces the BIOS as the bootstrap software that runs just after you turn on the computer in order to start the operating system. The way this aspect of a computer’s operation is designed has been radically improved, with the firmware now written in C rather than assembly language.

Optimised for today’s computers rather than yesterday’s computers

All of the computer’s peripherals are identified by function rather than by where they are connected. This will allow console devices such as the keyboard and the mouse to work properly whether they are connected via a link like the USB bus or via wireless connectivity. It also allows for different scenarios like “headless” boxes which are managed by a Web front-end, Remote Desktop Protocol session or similar network-driven remote-management setup. That ability has appealed to businesses which have large racks of servers in a “data room” or wiring closet, where the IT staff want to manage these servers from their desk or their home network.

Another, more obvious benefit is a quicker boot time, because of the new functions that UEFI allows for and because the UEFI code is optimised for today’s computer devices rather than the 1979-81-era ones. It is also designed to work with future connection methods and peripheral types, which means that there won’t be a need for “sidecar” BIOS or bootstrap chips on interface cards.

Other operational advantages

There is support in the UEFI standard for the bootstrap firmware to provide a multi-boot setup for systems that have multiple operating environments thus avoiding the need to provide a “bootloader” menu program on the boot disk to allow the user to select the operating environment. It will also yield the same improvements for those computers that allow the user to boot to a lightweight task-specific operating environment.

When will this be available

This technology has been implemented in some newer laptops and a lot of business-class servers, but from 2011 onwards it will become available in most desktop and laptop computers that appeal to home users and small-business operators. People who have their computers built by an independent reseller, or who build their own PCs, will likely have this function on motherboards released from this model year onwards.

Special Report – Windows 95 now 15 years old and a major change to the PC computing platform

During mid-1995, the Intel-based “IBM-PC” desktop computing platform was given a major improvement with the arrival of a new operating system from Microsoft. This operating system, initially codenamed “Chicago” and at various points expected to ship as “Windows 4” or “MS-DOS 7”, became known as Windows 95. It yielded so many improvements to this platform that it became increasingly legitimate as an “all-round” general-purpose computing platform that was ready for the Internet.

This operating system was launched with a huge campaign which revolved around the new “Start” button on the desktop, reinforced by the use of the Rolling Stones smash-hit song “Start Me Up”. The other visual element was the clouds in the sky, symbolising a new operating environment for your computer.

How did Windows 95 improve the Intel-based “IBM PC platform”

Computer-Management Improvements

Integration of Windows graphical user interface with MS-DOS operating system

Previously, a computer that worked on the “IBM PC platform” required the use of Microsoft’s MS-DOS operating system or a similar operating system like Digital Research’s DR-DOS as its base operating system. These operating systems didn’t come with a graphical shell unless you paid extra for one and ran the shell as a distinct program.

This typically required users either to run a third-party menu program or graphical user-interface “shell” like Automenu or Microsoft Windows, or one supplied with network software like Novell’s; or, if they had MS-DOS 4 or 5, to start the DOSSHELL graphical user interface. IBM typically pushed their OS/2 graphical shell as one suitable for any of their PS/2 series computers.

Now, Windows 95 integrated the graphical user interface with the MS-DOS operating system and ran this as the default setup. This avoided the need to remember to run particular programs to use a graphical user interface.

A lot less to run to add functionality to the computer

Previously, if you wanted to run sound, advanced graphics or other multimedia, use peripherals like a mouse or a CD-ROM drive, or use communications or computer networks, you had to make sure that you ran particular drivers or memory-resident programs. This typically required you to work with the CONFIG.SYS or AUTOEXEC.BAT files to make sure these programs started.
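As an illustration, a CONFIG.SYS and AUTOEXEC.BAT pair of the era that set up a CD-ROM drive and a mouse might have looked something like this (a hypothetical sketch; the driver file CDROMDRV.SYS is a stand-in name for whatever driver shipped with the drive):

```
REM --- CONFIG.SYS (device drivers loaded at boot) ---
DEVICE=C:\DOS\HIMEM.SYS
REM CDROMDRV.SYS is a stand-in for the drive maker's own driver
DEVICE=C:\CDROM\CDROMDRV.SYS /D:MSCD001

REM --- AUTOEXEC.BAT (memory-resident programs started after boot) ---
C:\DOS\MSCDEX.EXE /D:MSCD001
C:\MOUSE\MOUSE.COM
```

Get one line wrong, or load things in the wrong order, and the CD-ROM drive or mouse simply wouldn’t be there; this is exactly the kind of housekeeping Windows 95 took away from the user.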

If you wanted to increase the memory available to particular programs, you had to know how to stop a particular memory-resident program to free up the memory space. In the case of communications, you had to use communications programs, which were effectively “terminal emulators”, to work with bulletin boards, and these programs were the only ones that could control the modem. Similarly, if you ran a network, you would need to run networking software to allow the computer to benefit from the network. Some of these situations even required the location to have a resident “geek”, called a system administrator, to set up these computers. Even using the Internet on a Windows machine behind a dialup modem needed the user to run programs like Trumpet Winsock to establish the connection.

With Windows 95, most of these functions were simply handled by the operating system rather than by extra software that had to be started. This took away all of the extra requirements that the user needed to think of to run a highly-capable computer and do what they wanted to do.

This improvement alone allowed a small organisation to share files or printers between computers connected on a network with minimal configuration effort, and it opened up the path towards the home network.

Ready for the Internet

1995 was the year that the Internet came to the mainstream. Cyber-cafes had sprung up around town and new businesses called “Internet Service Providers” came on the scene. It was considered the “in thing” to have an email address where you could receive Internet-based email and you also had to know how to surf the Web. The old order of bulletin boards and online services with their “controlled media” had fallen away for this new “uncontrolled media” order that the Internet offered.

Windows 95 was capable of working with the Internet “out of the box” whether through a network or a dial-up service. This was because the operating system had an integrated TCP/IP stack with support for PPP-based dial-up protocols. There was even a basic email client provided with the operating system.

User-interface improvements

The Start Menu

This was a new take on the previous DOSSHELL programs, Windows Program Manager and the third-party menu programs as a place to find and start programs. Here, the user clicked on the Start button at the bottom left of the screen and found a tree of program names which represented the software found on their system.

This made it easier for most users to start working on whatever they wanted to work on, and it has become a standard motif for all of the Microsoft operating environments since this operating system.

Windows Explorer and the object-driven view

The file-management functionality was handed over to Windows Explorer which provided for a new way of managing files and objects. It allowed for programmatic views like a “My Computer” view that provided for a simplified shell or an “Explorer” view with a directory tree in a pane as well as an object-driven file view.

This collection-viewing concept was extended to the Control Panel and other operating-system components that used collections as they were introduced into the Windows platform.

Longer file names

Previously in MS-DOS, you were limited to an 8-character file name with a 3-character extension that was used for defining the file type. Since Windows 95, you could create a meaningful file name of up to 255 characters, which allowed you to identify your files more easily. There was a special truncated 8.3-format version of the file name for use with older programs that didn’t support the new file-name convention.

Longer file names became more important as digital cameras became popular, because people could name their photos in a way that reflected the content of the picture, and also as file-based audio storage came on to the scene.
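The truncated short-name convention can be sketched in Python. This is a simplified approximation of the scheme Windows 95 used (the real VFAT algorithm has extra rules for name collisions and hash-based tails); the function name `short_name` is my own:

```python
def short_name(long_name: str, seq: int = 1) -> str:
    """Approximate the Windows 95 8.3 short-name truncation scheme.

    A simplified sketch only: the real algorithm also handles
    collisions between similar names and more character rules.
    """
    base, dot, ext = long_name.rpartition(".")
    if not dot:                      # no extension at all
        base, ext = long_name, ""
    # Drop spaces and characters illegal in 8.3 names, upper-case the rest
    def clean(s):
        return "".join(c for c in s.upper() if c.isalnum() or c in "_-")
    base, ext = clean(base), clean(ext)[:3]
    tail = "~%d" % seq
    if len(base) > 8:
        base = base[:8 - len(tail)] + tail
    return base + ("." + ext if ext else "")

print(short_name("My Holiday Photos.jpeg"))   # MYHOLI~1.JPE
```

So a photo named “My Holiday Photos.jpeg” would appear to an old DOS-era program as something like MYHOLI~1.JPE, while newer software saw the full name.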

The Registry configuration-data store

Microsoft introduced the Registry configuration-data store as a way of avoiding the need to maintain multiple configuration files across the system. This store provided a centralised point of reference for the configuration data that the operating system and applications needed to keep persistent across sessions.
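To illustrate the idea only (a toy sketch in Python; it bears no relation to the real Win32 registry API, and the class name and key paths are invented for this example), a single hierarchical store addressed by key paths might look like this:

```python
class TinyRegistry:
    """Toy illustration of a centralised, hierarchical settings store.

    Demonstrates only the concept of one tree holding every
    application's settings, instead of scattered per-program files.
    """

    def __init__(self):
        self._root = {}

    def set_value(self, key_path, name, value):
        # Walk (and create) the key hierarchy, then store the named value
        node = self._root
        for part in key_path.split("\\"):
            node = node.setdefault(part, {})
        node[name] = value

    def get_value(self, key_path, name):
        node = self._root
        for part in key_path.split("\\"):
            node = node[part]
        return node[name]

# Two applications share one store instead of two private configuration files
reg = TinyRegistry()
reg.set_value(r"HKEY_CURRENT_USER\Software\ExampleApp", "WindowWidth", 800)
reg.set_value(r"HKEY_CURRENT_USER\Software\OtherApp", "Theme", "dark")
print(reg.get_value(r"HKEY_CURRENT_USER\Software\ExampleApp", "WindowWidth"))  # 800
```

The real Registry layered data types, security and change notification on top of this basic idea, but the single-tree principle is what replaced the scattered configuration files.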

Under-the-hood improvements

Integration with the 32-bit computing world

This operating system was built from the ground up to be a true 32-bit operating system that was tuned to work with the 32-bit processors that emerged since the Intel 80386DX processor. This would then allow software developers to compile their programs to run their best in a 32-bit computing environment.

This was in contrast to programs like Microsoft Word 6.0, which were compiled for Intel-architecture 32-bit processors but in a manner that kept them compatible with 16-bit processors of the same architecture. As well, most of the MS-DOS operating systems were compiled for use with the 8-bit “PC/XT” environments and/or the 16-bit “PC/AT” environments. These operating-system limitations didn’t allow such programs to work at their best even when run on a computer with a 32-bit processor.

This allowed for a variety of optimised computing setups, like the true multitasking and multithreading that these newer processors could cater for.

It is like Windows 7 today, where the operating system has been tuned for a 64-bit computing world and optimised for the newer multicore processors that are part of the Intel processor architecture.

Readiness for newer computing designs

Windows 95 also catered for newer computing design principles such as the “soft-off” principle that was part of portable laptop computers and was to be part of the up-and-coming ATX desktop-computer design standard. This principle catered for “one-touch” power-off and modem-based / network-based power-on practices, which allowed for improved system management, for example.

The operating system also allowed for various forms of extensibility through the use of standards, class drivers and similar practices that avoid the need to overload Windows with drivers.

Conclusion

Windows 95 wasn’t just an “ugly duckling” of an operating system but a major turning point for the evolution of the Windows platform. Happy Birthday Windows 95!

Criminal legal action now being taken concerning “scareware”

Articles

Scareware Indictments Put Cybercriminals on Notice – Microsoft On The Issues

Swede charged in US over ‘scareware’ scheme | The Local (Sweden’s News in English) – Sweden

US-Behörden klagen Scareware-Betrüger an | Der Standard (Austria – German language)

From the horse’s mouth

FBI Press release

My comments

What is scareware

Scareware is a form of malware that presents itself as desktop security software. Typically this software relies heavily on flashing up user-interface dialogs that mimic known desktop security programs, whether add-on programs or functions integral to the operating system. It also puts up dialogs requiring you to “register” or “activate” the software in a similar manner to most respected programs. This usually leads you to Web sites that require you to enter your credit-card number to pay for the program.

In reality, they are simply another form of Trojan Horse, working in a similar manner to the easy-to-write “fake login screen” Trojans that computer hackers have created in order to capture an administrator’s high-privilege login credentials. Some scareware is even written to take over the computer user’s interactive session, usually with processes that start when the computer starts, so as to “ring-fence” the user from vital system-control utilities like Task Manager, Control Panel or command-line options. In some cases, it also stops any executable file from running unless it is on a narrow list of approved executables. Scareware is also known to nobble regular desktop anti-malware programs so that they don’t interfere with its nefarious activities. The behaviour outlined here is from observations I made over the last few weeks while trying to get a teenager’s scareware-infested computer back to normal operation.

Who ends up with this scareware on their computer

Typically, the kind of user who will end up with such software on their computer is the consumer or small-business operator who is computer-naive or computer-illiterate and is most likely to respond to banner ads hawking “free anti-virus software”. They may not know which free consumer-grade anti-virus programs exist for their computing environment. In a similar context, they may have found their computer operating below par and have often heard advice that their computer is infested with viruses.

What you should do to avoid scareware and how should you handle an infestation

The proper step to take to avoid your computer being infested with scareware is to make sure you are using reputable desktop security software. If you are strapped for cash, you should consider using AVG, Avast, Avira or Microsoft Security Essentials, links to which are in the links column on the right of your screen when reading this article on the site.

If you have a computer that is already infected with this menace, it is a good idea to use another computer, whether on your home network or at your workplace, to download a “process-kill” utility like rkill.com to a USB memory key or CD-R and run this on the infected computer immediately after you log in. It may also be worth visiting the “Bleeping Computer” resource site for further information regarding removing the particular scareware threat that is affecting your computer. I have had very good experience with this site as a resource when I handled a computer that was infested with scareware.

If you are at a large workplace with a system administrator, ask them to prepare a “rescue CD” with the utilities from the “Bleeping Computer” Web site, or to provide a link or “safe-site” option on your work-home laptop to this site, so you can use this computer as a “reference” unit for finding out how to remove scareware from a computer on your home network.

How the criminal law fits in to this equation

The criminal law is now being used to target the “scareware” epidemic through charges centred around fraud or deception. Like other criminal cases involving the online world, this one touches on situations where the offenders are resident in one or more countries while the victims are in the same or different countries at the time of the offence.

This case could raise questions concerning different standards of proof for trans-national criminal offences, as well as the point of trial for any such offences.

Conclusion

Once you know what the “scareware” menace is, you can recognise these threats and handle an infestation, and you can take comfort that criminal-law measures are being used to tackle it.

Disclaimer regarding ongoing criminal cases

This article pertains to an ongoing criminal-law action that is likely to go to trial. Nothing in this article is written to imply guilt on the part of the accused, who are innocent until proven guilty beyond reasonable doubt in a court of law. All comments are based either on previously-published material or my personal observations relevant to the commonly-known facts.

Heads-up: Google Chrome is now at version 5.0

Articles

Chrome 5.0 en version finale | Le Journal du Geek (France – French language)

Google veröffentlicht Chrome 5 für Windows, Mac OS und Linux | Der Standard (Austria – German language)

Google ships “fastest-ever” Chrome out of beta | The Tech Herald

Download link

http://www.google.com/chrome

My comments

Google have updated their Chrome browser to the next major version. It has been fine-tuned “under the hood” for speed in a similar way to what has happened with Windows 7 and MacOS X “Snow Leopard” and is intended to be faster than the prior versions.

There are also improvements in how it handles the new HTML5 language, which will make it ready for the Web’s new direction. Other improvements include “experience synchronisation” between different computers, a must-have if you are upgrading computers constantly or operating two different computers like a desktop and a laptop.

At the moment, there isn’t a stable Adobe Flash plugin for this version but it will be provided as part of the browser’s update process.

This may appeal to you if you have jumped from Internet Explorer to Google Chrome, whether directly or through the Browser Choice screen in Europe.

The Browser Choice Screen – we are still not happy

Browser developers mobilise against Microsoft (“Les éditeurs de navigateurs se mobilisent contre Microsoft”) – DegroupNews.com (France – French language)

My comments on this situation

There is still some disquiet in the European Union regarding the Browser Choice Screen that Microsoft launched in that market on 1 March 2010 to satisfy the European Commission’s anti-trust issue concerning their delivery of Internet Explorer 8 as the standard browser for the Windows platform.

The main issue was that the only browsers that were immediately visible to the user were the “top 5” desktop browsers – Google Chrome, Mozilla Firefox, Apple Safari, Microsoft Internet Explorer and Opera. The user had to “pan” the menu rightwards to see the other browsers like Maxthon, GreenBrowser, K-Meleon and Flock. This had annoyed the developers of these alternative browsers, some of which were “super-browsers” built on either the Mozilla Firefox or Internet Explorer codebases and were endowed with extra features.

These browser developers want the European Commission to mandate an easily-identifiable visual cue as part of the Browser Choice Screen user interface to indicate that more browsers are available. This is even though there is a variable-width scroll bar under the browser list that can be dragged left and right to reveal the other browsers.

Personally, I would also look into an alternative user-interface layout in the form of a 6 x 2 grid for the browser-selection part, rather than the current “ribbon” menu. This would allow more browsers to be shown to the user at once, but the downside is that it requires more screen real-estate, which limits its utility on smaller screens like netbooks. It may also make the user interface more cluttered and intimidating.

It is certainly a situation that reminds me of many council planning-permission fights that I have read about in various local newspapers whenever one of the big American fast-food chains like KFC or McDonalds wants to set up shop in a neighbourhood. A recurring argument in these reports is that the fast-food chain’s logo and colour scheme stand out like a sore thumb against all the small cafés that had existed previously in that area. The alternative browser developers like Maxthon see themselves as the small café that is put out of business by the “big boys” (Google Chrome, Mozilla Firefox, Internet Explorer & co), who are seen in a similar light to McDonalds, KFC & co.

Understanding the Browser-Choice Screen – Updated

News articles

Microsoft offers web browser choice to IE users | BBC Technology (UK)

Microsoft about to offer Windows users a browser choice screen | The Guardian Technology Blog (UK)

Competition between Web browsers relaunched in Europe (“La concurrence entre navigateurs web relancée en Europe”) | DegroupNews (France – French language)

From the horse’s mouth

The Browser Choice Screen for Europe: What to Expect, When to Expect It | Microsoft On The Issues (Microsoft)

UPDATE: The Browser Choice Screen for Europe – Microsoft On The Issues (Microsoft)

European Union press release about the Browser Choice screen

Browser Choice Screen shortcut (available anywhere in the world)

http://browserchoice.eu

Advocacy site

OpenToChoice.org (Mozilla)

My comments and further information

If you run a version of Windows XP, Vista or 7 that you bought in Europe and your default browser is Internet Explorer 8, you may be required to complete a “browser-selection” ballot screen, known as the Browser Choice screen, to determine which browser your computer should run as its default. It may not happen if you had run another browser as your default, then came back to Internet Explorer 8. It will also happen to migrants from Europe who brought their Windows computers out with them.

You will have to work through a “wizard” which has an introduction screen, then the list of browsers presented in a random order. Once you choose a browser, it will become your default Web-browsing tool every time you go to a Web page. If the browser isn’t installed on your system, the software will be downloaded from the developer’s site and installed on your system.

If you run Windows 7, the Internet Explorer “e” logo will disappear from the Taskbar, but you can still find it in your Start Menu. You can then reattach it to your Taskbar by right-clicking on the program in the Start Menu and selecting “Pin to Taskbar”.

The Browser Choice screen will subsequently become available as another method of changing default browsers, alongside the options available when you install, update or run a Web browser.

There are some issues you may run into if you move from Internet Explorer 8 to another browser. One is that you won’t have your RSS feeds held in the Common Feed List which works as part of Windows Vista and 7. This may affect the addition of new feeds to programs that use the Common Feed List as their RSS data store. Similarly, Windows 7 users won’t benefit from having the tabs viewable in Aero Peek’s multi-window preview. These issues may be resolved as alternative browsers are built to integrate tightly with the host operating system’s features, something made possible by Microsoft publishing the relevant Windows application-programming-interface documentation.

At the moment, there isn’t a program that adds installed browsers to the shortcut menu when you right-click on a Web link. Such a program would benefit Web developers and bloggers who want to test a page under different browsers or people who want to “spread the Web-viewing load” amongst different clients.
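A program of this kind could, on Windows, hook into Internet Explorer’s documented “MenuExt” registry mechanism, which adds custom entries to the right-click menu for links. The following registry fragment is a hypothetical sketch only: the menu caption and the path to the script file are illustrative assumptions, not part of any shipping product.

```
Windows Registry Editor Version 5.00

; Hypothetical sketch: add an "Open link in Firefox" entry to
; Internet Explorer's link context menu via the MenuExt mechanism.
; The HTML script file at the path below (an assumption) would read
; the clicked link and launch the other browser with it.
[HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\MenuExt\Open link in Firefox]
@="C:\\Tools\\open-in-firefox.html"
; Contexts 0x20 restricts the entry to anchors (Web links) only.
"Contexts"=dword:00000020
```

A full utility would enumerate the installed browsers and create one such entry per browser, which is exactly the “spread the Web-viewing load” scenario described above.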

Author recommendations (in no particular order)

I recommend any of these browsers because users don’t have to relearn the user interface if they switch between them.

Mozilla Firefox

Internet Explorer

Opera

Safari

Microsoft Internet Explorer antitrust case resolved by European Union

EU resolves Microsoft IE antitrust case | Microsoft – CNET News

From the horse’s mouth

European Union

Microsoft’s press release

My comments on this issue

Previously, there was talk of Microsoft having to supply European customers with “browser-delete” options for copies of the Windows 7 operating system, where they would have to explicitly download their browser of choice and wouldn’t be able to “get going” with Internet Explorer. Now, there is a requirement to provide a “browser-select” screen where you can install any of 12 alternative browsers and nominate one of them as the default browser. The browsers will be organised in a random order so as not to favour Internet Explorer or a “browser skin” with hooks into the Internet Explorer code.
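The “random order so as not to favour” requirement is worth pausing on: a fair presentation needs a shuffle in which every ordering is equally likely. A minimal sketch in Python, using the article’s “top 5” browser names purely as sample data:

```python
import random

# Sample data: the "top 5" browsers named in the article.
browsers = ["Google Chrome", "Mozilla Firefox", "Apple Safari",
            "Internet Explorer", "Opera"]

def choice_screen_order(items, rng=random):
    """Return a fresh copy of the browser list in uniformly random order.

    random.shuffle implements the Fisher-Yates algorithm, so every
    permutation is equally likely and no vendor is favoured.
    """
    shuffled = list(items)   # copy, so the master list is untouched
    rng.shuffle(shuffled)
    return shuffled

print(choice_screen_order(browsers))
```

The subtlety is that naive approaches (such as sorting on a random comparator) produce biased orderings, so a proper shuffle matters when the whole point of the exercise is vendor neutrality.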

One improvement that I like is that you can deploy more than one browser from the “browser-select” screen, which will please Web-site developers who want to test their sites in other browser environments. Similarly, this will please users who are testing browsers for a proposed usage environment or replicating problems encountered with a particular browser.

It will be feasible for a computer supplier to “run with” a different default browser yet consumers can choose whichever browser suits them better. This would be more so with operations like Dell or the small independently-run High Street computer shops who build computers “to order” for individuals, rather than suppliers like HP/Compaq or Toshiba who build systems to particular packages to be sold through electronics chain stores.

The only open issue is whether an individual or organisation can nominate a particular browser as part of a Windows-based “standard operating environment” when they specify their computer equipment, and not have to pass through the “browser-select” screen. Also, what will be the expectation for proposed computer fleets and “standard operating environments”? Will the company that buys the computer equipment be able to determine the default browser for its environment, or will it be required to let individual staff members and end-users choose which browser they work with? I raise this because in some EEA countries like France, there is an organised-labour culture where the trade unions can exercise a lot of influence over what goes on in a workplace.

Another issue that may need to be raised is whether the European-specific “browser-choice” arrangements will be available outside of the European Economic Area. This may be of concern to independent system builders who may want to assure customers of browser choice as a differentiating factor or local, state or federal government departments who may want to be assured of this for computers supplied as part of their IT programs operating in their area or as part of a legislative requirement for their area. It may also be of benefit to PC users who want to load their computers with many browsers so as to, for example, test a Web site under many operating environments.

Application-distribution platforms for smartphones and other devices

At the moment, there is an increasing number of PDAs, smartphones and mobile Internet devices that can be given extra functionality by the user after purchase. This is typically achieved through the user loading on to their device applications developed by a large community of programmers. This practice will end up being extended to other consumer-electronics devices like printers, TVs, set-top boxes, and electronic picture frames as manufacturers use standard embedded-device platforms like Android, Symbian or Windows CE and common “embedded-application” processors for these devices. It will be extended further to “durable” products like cars, business appliances and building control and security equipment as these devices end up on these common platforms and manufacturers see this as a way of adding value “in the field” for this class of device.

From this, I have been observing the smartphone marketplace and am noticing a disturbing trend where platform vendors are setting up their own application-distribution platforms, which usually manifest as “app stores” that run on either the PC-device synchronisation program or the device’s own user-interface screen. These platforms typically require the software to be pre-approved by the platform vendor before it is made available and, in some cases like the Apple iPhone, you cannot obtain the software from any other source such as the developer’s Web site, a competing app store or physical media. You may not even be able to search for applications using a Web page on your regular computer; instead, you have to use a special application like iTunes or the phone’s user interface.

People who used phones based on the Windows Mobile or Symbian S60 / UIQ platforms were able to install applications from either the developer’s Website or a third-party app store like Handango. They may have received the applications on a CD-ROM or similar media, either as the mobile extension of software they were buying or simply as a mobile-software collection disc. They could also download the installation package from these sites and upload it to their phone using the platform’s synchronisation application. In some cases, they could obtain the application through the carrier’s mobile portal and, perhaps, have the cost of the application (if applicable) charged against their mobile-phone account. They could even visit the application Website from the phone’s user interface and download the application over the 3G or Wi-Fi connection, installing it straight away on the phone.

The main issue that I have with application-distribution platforms controlled by the device platform vendor is that if you don’t have a competing software outlet, including the developer’s Web site, a hostile monopolistic situation can exist. As I have observed with the iPhone, there are situations where the platform vendor can arbitrarily deny approval for software applications or can make harsh conditions for the development and sale of these applications. In some cases, this could lead to limitations concerning application types like VoIP applications being denied access to the platform because they threaten the carrier partner’s revenue stream for example. In other cases, the developer may effectively receive “pennies” for the application rather than “pounds”.

What needs to happen with application-distribution platforms for smartphones and similar devices is to provide a competitive environment. This should take the form of developers being able to host and sell their software from their own Websites rather than merely providing a link to the platform’s app store. As well, the platform should allow one or more competing app stores to exist on the scene. It also includes carriers or service providers being able to run their own app stores, using their existing billing relationships with their customers, such as charging for software against the customer’s operating account. For “on-phone” access, this can be facilitated through uploadable “manifest files” that point to an app store’s catalogue Website.
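To make the “manifest file” idea concrete, such a file could be a small machine-readable document that a user uploads to the phone to register a third-party store. The fragment below is purely a hypothetical illustration: the element names, store name and URL are my own assumptions, not any real platform’s format.

```xml
<!-- Hypothetical app-store manifest. Every element name, the store
     name and the URL are illustrative assumptions only. -->
<appstore-manifest version="1.0">
  <store-name>Example Carrier App Store</store-name>
  <!-- Where the phone fetches the store's application catalogue -->
  <catalogue-url>https://appstore.example-carrier.com/catalogue</catalogue-url>
  <!-- Charge purchases against the customer's mobile-phone account -->
  <billing method="carrier-account"/>
</appstore-manifest>
```

Once such a file is loaded, the phone’s software-management screen could list the new store alongside the platform vendor’s own, which is the competitive arrangement argued for above.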

As well, the only tests that an application should have to face are for device security, operational stability and user-privacy protection. The same tests should also include acceptance of industry-standard interfaces, file types and protocols rather than vendor-proprietary standards. If an application is about mature-age content, the purchasing regime should include industry-accepted age tests like purchase through credit card only for example.

Once this is achieved for application-distribution platforms, you can achieve a “win-win” situation for extending smartphones, MIDs and similar devices.