The BBC have used their world-famous radio-play craft to create an interactive audio drama that works hand-in-glove with the Amazon Alexa platform. Listeners interact with their Amazon Echo or other Alexa-based smart speaker to direct how the story goes. Here, radio-play expertise is intermingled with the spirit of the “Choose Your Own Adventure” storybooks and the text-based adventure computer games.
Now the German ARD group of public-service broadcasters have taken on a similar effort, carried on the back of the “Tatort” crime-drama series that is a mainstay of German-language TV content. The effort is very similar to the early Police Quest series of crime-themed graphic adventure games that Sierra launched through the late 1980s and early 1990s; or the L.A. Noire video adventure game released in 2011 and set in late-1940s post-WWII Los Angeles.
They have taken this further by making it work on both the Alexa and Google Assistant platforms including their mobile-platform assistant apps as well as the smart speakers. In addition to this, ARD even provides a Web-based interactive audio adventure so you don’t have to use Amazon Alexa or Google Assistant for this kind of game.
What is distinctive about Tatort is that this German-language crime series has different investigation teams, each based in a different city or district within Germany, Austria or Switzerland, who solve the cases within that area. Each episode, broadcast in the German-speaking countries through their public-service broadcasters on Sunday nights at 8:15pm local time, presents a case from a different city.
This interactive audio play, called Höllenfeuer in German or Hellfire in English, has been prepared primarily by Bayerischer Rundfunk (BR) with the help of Westdeutscher Rundfunk (WDR) and uses the Munich-based Tatort crime-investigation team. The player controls the character of Kommissarin Mavi Fuchs, who works alongside Kriminalkommissar Kalli Hammermann to solve the case. You use voice commands to direct these protagonists through the interactive audio play, which can be replayed if you are still trying to get a grip on the case.
What is happening here is that some broadcasters are discovering the idea of mixing radio plays with interactive elements to provide the audio equivalent of classic adventure computer games, then linking these products with voice-driven assistant platforms. The approach ARD have taken with Tatort is to use this new form of delivering audio content to take their existing intellectual property, especially a tentpole TV show, further.
It can be seen as a way to take an existing content franchise further and implement it in a new form, especially an interactive audio play. This will be more so as smart speakers and other voice-driven-assistant devices become more popular.
A consistent problem associated with bringing broadband Internet to rural and remote places is the cost and time involved in bringing these services there. But there have been various efforts by public and private sector entities to implement satellite broadband to serve this need.
Most of these have distinct disadvantages: the equipment and service can be very costly, and many of these services don’t offer great bandwidth or latency. Let’s not forget that the deployment of this technology isn’t all that scalable.
The COVID-19 coronavirus pandemic has underscored how dependent we are on Internet connectivity for our business and social lives. The role of rural areas has also been underscored, with these areas gaining increased appeal as places to live or do business because of the pandemic. A recent Euroconsult report stated that satellite broadband will grow in value over the next decade as a way to enable Internet access from remote areas.
The new low-earth-orbit satellites
… allowing more rural and remote areas to gain real broadband
But a new form of satellite broadband is being rolled out at the moment. This is based on low-earth-orbit satellite technology, which uses a very large constellation of satellites that orbit much closer to Earth than traditional geostationary satellites. This improves both the latency and the bandwidth available to end users.
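The latency gain is straightforward physics: radio signals travel at roughly the speed of light, so a satellite orbiting closer to Earth means a much shorter round trip. A back-of-envelope sketch in Python, assuming a geostationary altitude of 35,786 km and a Starlink-style low-earth orbit of roughly 550 km:

```python
# Back-of-envelope propagation latency for a "bent-pipe" satellite link.
SPEED_OF_LIGHT_KM_S = 299_792  # km/s in vacuum

def round_trip_latency_ms(altitude_km: float) -> float:
    """Minimum two-way propagation delay in milliseconds.

    A request travels user -> satellite -> gateway (about 2x the altitude)
    and the response comes back the same way, hence 4x the altitude overall.
    """
    return (4 * altitude_km / SPEED_OF_LIGHT_KM_S) * 1000

geo_ms = round_trip_latency_ms(35_786)  # geostationary orbit
leo_ms = round_trip_latency_ms(550)     # typical Starlink low-earth orbit

print(f"GEO: ~{geo_ms:.0f} ms, LEO: ~{leo_ms:.1f} ms")
```

This ignores ground-network and processing delays, but it shows why geostationary links carry close to half a second of unavoidable round-trip delay while low-earth-orbit links stay in the single-digit milliseconds.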
Silicon Valley visionaries like Mark Zuckerberg and Elon Musk have backed this technology with the goal of bringing the Internet to the whole world, even its remotest areas.
But Elon Musk has got this idea off the ground with Starlink, a subsidiary of his SpaceX venture. Much of the Starlink satellite constellation is in orbit now, with more satellites being manufactured and prepared for launch. The service is in beta testing in the USA, UK, Ireland, New Zealand and Germany at the time of writing, but more areas are expected to be covered soon. Starlink has also started establishing a presence in Australia.
Elon Musk’s service isn’t just for rural and remote areas, though. He is seeking FCC type approval for equipment that is to be installed on vehicles, ships and aircraft and operated while the vehicle, vessel or aircraft is moving. This is to court the provision of Internet service aboard the likes of commercial jets, the merchant navy and long-distance land transport. Who knows when Musk will have consumer equipment designed to facilitate ad-hoc use of Starlink from caravans, motorhomes or remote camping locations.
Another service being rolled out at the moment is OneWeb, backed by a UK and Indian consortium. Let’s not forget that Amazon is working on their Project Kuiper low-earth-orbit satellite service, but they want to make sure everything is perfect before a single satellite is launched.
The idea of having many satellites is being made feasible by reusable rockets like the SpaceX Falcon 9, which effectively reduce the cost of launching many spacecraft.
What I see of the low-earth-orbit satellite constellations is that they are intended to be viable competition in the satellite-broadband Internet service space. This could allow the idea of cost-effective high-throughput low-latency broadband to be made available to rural and remote areas or long-distance transport applications.
Intel is intending to increase its semiconductor manufacturing capacity within the United States, as outlined in the latest vision speech held at their American headquarters.
One of the goals behind this push is to challenge Asian dominance in microelectronics manufacturing. This is of concern since most of the silicon used in today’s electronics is manufactured in Taiwan. Here, if political tensions between China and Taiwan escalate, it could spell disaster for IT and allied industries, including the automotive, aerospace and defence sectors, thanks to the continued concentration of microelectronics manufacture there.
This will be important also for vehicle manufacturers and the like as well as computer and consumer-electronics manufacturers
This has also been underscored by the recent shortage of advanced microelectronics components, which is impacting not only the manufacture of finished computer hardware products but also the manufacture of other products, like cars, that effectively have their own computers. For example, some vehicle builders were even keeping finished cars at their factories until certain silicon chips became available before they could release them to the dealerships.
Intel intends to set up and open two new semiconductor factories in Arizona and not just use them for Intel’s own microelectronics products. Here, they will be capable of working as semiconductor foundries that manufacture silicon chips for other vendors, typically “fabless” semiconductor designers like Qualcomm or Apple who outsource their actual manufacturing.
Intel will undertake further work to open up factories within the USA and Europe with the goal of tipping the scales in favour of these areas when it comes to manufacturing advanced silicon. It will underscore these countries’ sovereignty when it comes to advanced microelectronics manufacture allowing them to make their own cutting-edge technology from the drawing board to the finished product themselves.
Another direction Intel sees for its silicon design and manufacture is to license out Intel’s intellectual property to third parties to add value to or turn into finished products. It will also mean that Intel’s factories will end up making silicon based on RISC microarchitectures like the open-source RISC-V technology or the established ARM technology.
If Intel gets this idea off the ground, it could be a chance for semiconductor foundries capable of advanced microelectronics manufacture to appear within the USA, Canada, Europe and Australasia. This will help those countries’ industries that depend on this kind of technology, like green tech, consumer electronics or transport.
A growing trend in software maintenance and quality assurance is to assure the ubiquitous availability of critical security, software-quality and compliance updates for a device or program. This is done by delivering such updates under separate cover from major updates that primarily add features and functionality.
You may think of these critical updates as just security patches for the device or program, but they can include general bugfixes, software refinements to have the program run more efficiently, or compliance modifications such as updating daylight-saving-time rules for a particular jurisdiction.
Microsoft, Google and Apple have headed that way with Windows 10, Android and MacOS respectively. This approach benefits the software developer and the user equally because the security, software-quality or compliance patches are usually small files. The software developer can assure their delivery and installation even on older devices that aren’t able to take newer versions of the software, thus hardening the device’s platform against security exploits.
Similarly, the user can choose not to install a functionality update if they don’t see fit or find that it presents a steep learning curve due to significant user-experience changes. This is more so where a user would rather run a highly-stable version of the operating system than the latest “rushed-out” version that carries bugs.
Apple will be taking this approach with iOS soon. Previously, the iOS mobile operating system was maintained through the delivery of major versions offering new functionality, with bugfixes and security patches delivered as a minor or “point” version dependent on a major version, something considered orthodox in the world of software maintenance and quality assurance.
But if Apple were to “reach” older iOS versions with a security or compliance update, they would need to offer a minor or “point” version for each prior major version as a separate software package. This is an issue that affects people who maintain older iOS devices, especially iPads or iPod Touch devices that are less likely to take newer major versions of iOS.
Through the development of iOS 14.5, Apple has looked into the idea of “splitting” the critical updates from the main software package so that these can be delivered under separate cover. This could also allow Apple to package one of these updates to touch multiple major versions of the operating system.
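As a rough illustration of what “one update touching multiple major versions” could look like, here is a hypothetical Python sketch; the class names, update identifiers and catalogue structure are all invented for illustration and are not Apple’s actual mechanism:

```python
# Hypothetical sketch: critical updates decoupled from feature updates, where a
# single package declares which major OS versions it applies to.
from dataclasses import dataclass, field

@dataclass
class CriticalUpdate:
    identifier: str
    kind: str                                            # "security", "bugfix" or "compliance"
    applies_to_majors: set = field(default_factory=set)  # e.g. {12, 13, 14}

def updates_for_device(installed_major: int, catalogue: list) -> list:
    """Return the critical updates applicable to a device's installed major version."""
    return [u for u in catalogue if installed_major in u.applies_to_majors]

catalogue = [
    CriticalUpdate("SEC-2021-001", "security", {12, 13, 14}),  # one package, many majors
    CriticalUpdate("DST-2021-EU", "compliance", {13, 14}),
    CriticalUpdate("FIX-14-RAPID", "bugfix", {14}),
]

# An older device stuck on major version 12 still receives the cross-version patch.
print([u.identifier for u in updates_for_device(12, catalogue)])  # → ['SEC-2021-001']
```

The point of the design is in the `applies_to_majors` set: the vendor publishes one small critical-update package and every supported major version picks it up, instead of the vendor cutting a separate “point” release per major version.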
It could also be a chance for Apple to see a long service life out of iOS devices especially where older devices may not run the latest major version of iOS. This would be very applicable to iPad and iPod Touch users who see long-term use out of those devices or families who pass down older iPhones to their children. It could also be a chance for Apple to keep multiple hardened codebases for iOS going but able to support different device abilities.
It will also encourage Apple to deliver frequent software patches to iOS users especially if they can be installed without restarting the device. This is more so if Apple wants to create a tighter software-quality-assurance regime for their platforms.
But Apple will also have to provide separate critical-update delivery for their tvOS operating system, which drives their recent Apple TV devices, and their watchOS operating system, which drives their Apple Watch products. It can then be about creating a robust software quality-assurance approach across all of their products while catering to people who maintain older products.
A very common issue affecting multiple-premises buildings like apartment blocks, office blocks and shopping centres is the provision of wireline telecommunications infrastructure through these buildings to serve tenants or lot owners who want to benefit from services offered through the infrastructure. Here, there can be problems regarding the landlord or other powers-that-be who have oversight of the building accepting the installation of such infrastructure.
The United Kingdom is facing this problem with its large multi-premises buildings, but in a particular way. There, most of these buildings are owned by a single landlord who leases out each premises, i.e. an apartment or retail/office space, to a tenant in exchange for monthly rent. But the landlords tend to have a lot of “clout” when it comes to permitting infrastructure to be deployed through a building.
What has been happening with the deployment of next-generation broadband infrastructure in these buildings is that some landlords are not responding to requests regarding this infrastructure existing in their buildings. This is in contrast to most landlords, who take up the offer of next-generation broadband through their building because it gives the building or the lettable space more marketable value.
It is seen as an aggravating issue as multiple regional broadband infrastructure providers set up shop in villages, towns and cities across the country in order to provide cost-effective Gigabit internet service to their residents.
A new law, the Telecommunications Infrastructure (Leasehold Property) Act 2021, has been enacted through the whole of the UK to answer this matter. This allows a telecommunications infrastructure network provider to deploy broadband infrastructure through a multiple-premises building or similar leasehold building.
It facilitates an improved tribunal-based dispute-resolution mechanism as well as an obligation on landlords to facilitate the deployment of digital infrastructure through their buildings. These actions come into play when the landlord has repeatedly failed to respond to requests from an ISP to install a broadband connection that a tenant has requested.
A lot of the talk around this law focused on pure-play residential developments, i.e. apartment blocks and towers. But there is effectively the idea of extending the scope of this law to cover commercial-focused developments like office blocks and shopping centres. I also see this encompassing mixed-use developments that have commercial and residential premises, as is increasingly the trend with apartment blocks having the ground floor or the first few floors given over to commercial or retail premises.
Of course, the questions that come up include who assumes responsibility for the installation and maintenance of any infrastructure between the communications room and the individual premises, and whether that infrastructure belongs to the landlord or the network provider.
The law will undergo periodic review and refinement, as any well-oiled legislative instrument should. But I also see this benefiting network infrastructure operators who serve dense urban areas where many large apartment blocks and high-rise developments exist.
An issue that has to be looked at during this review cycle is situations where multiple network infrastructure providers approach a building’s landlord and seek to arrange connection. Here, it will be about whether unnecessary duplication of “communications-closet to premises” infrastructure should take place, especially if such infrastructure is of the same medium, like optical fibre, RF coaxial cable or Cat5 Ethernet. It is a situation that will come about as Internet service becomes more competitive in the UK’s urban areas and multiple service providers knock on a landlord’s door or tout tenants for their services.
Then there will be the question of whether a landlord must rent out roof space on their multiple-premises building for RF-based communications services like 5G small-cell base stations, digital-broadcasting infill repeaters or business-radio transmitters. This question will be distinct due to the building’s premises tenants not directly benefiting from the infrastructure and will encompass the installation of associated power and wireline backhaul infrastructure.
At least there are processes in place to make sure that large multiple-premises buildings in the UK will benefit from ultrafast broadband Internet services.
Now with three major desktop computing platforms and two mobile computing platforms on the market, there is a demand to create software that can run on all of them. It also means that the software has to operate in a manner that suits the different user experiences that different computing devices offer.
.. and tablets
The differing factors for the user experiences include screen size and general aspect ratio as in “portrait” or “landscape”; whether there is a keyboard, mouse, stylus or touchscreen as a control interface; or, nowadays, whether there are two or more screens. Then you have to think of whether to target a mobile use case or a regular-computer use case and optimise your software accordingly. You may even have to end up targeting “small mobile” (smartphone), “large mobile” (iPad or similar tablet), “desktop” (desktop or laptop computer including 2-in-1 convertibles) or “lean-back” (smart TV / set-top / games console) use cases at once.
.. and laptops with the same codebase
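The use-case “buckets” described above could be sketched as a simple classification function; the thresholds and category names here are my own assumptions for illustration rather than anything published by Flutter or another framework:

```python
# Illustrative sketch: classifying a device into a target use-case "bucket"
# from a few user-experience factors. The thresholds are assumptions.
def classify_use_case(screen_inches: float, has_touch: bool,
                      has_keyboard: bool, is_tv: bool = False) -> str:
    if is_tv:
        return "lean-back"      # smart TV, set-top box or games console
    if has_keyboard and screen_inches >= 11:
        return "desktop"        # desktop or laptop, including 2-in-1 convertibles
    if has_touch and screen_inches >= 8:
        return "large mobile"   # iPad-class tablet
    return "small mobile"       # smartphone

print(classify_use_case(6.1, True, False))              # small mobile
print(classify_use_case(10.9, True, False))             # large mobile
print(classify_use_case(13.3, False, True))             # desktop
print(classify_use_case(55, False, False, is_tv=True))  # lean-back
```

A real cross-platform framework would feed factors like orientation, stylus support and screen count into a responsive layout system rather than a single function, but the principle of branching the user interface on the detected use case is the same.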
Google and Microsoft have established a partnership to position Google’s Flutter 2 software development platform as a way to create desktop+mobile software solutions. It builds on Microsoft’s foundation stones like their BASIC interpreters, which got most of us into personal computing and software development.
Here it is about creating common codebases for native apps that target iOS, Android, Windows 10, MacOS and Linux, alongside Web apps that work with Chrome, Firefox, Safari and Edge. But the question that could be raised is: if an app is targeted for Google Chrome, would it work fully with other Chromium-based Web browsers like the new Microsoft Edge browser, the Opera browser or Chromium for Linux?
The creation of Web apps may be about being independent of platform app stores which have a strong upper hand on what appears there. Or it may be about reaching devices and platforms that don’t have any native software development options available to average computer programmers.
Some of the targeted approaches for this new platform include “progressive Web apps” that can run on many platforms using Web technology and omit the Web-browser “chrome” while these apps run.
The new Flutter 2 platform will also be about creating apps that take advantage of multiple-screen and foldable setups, in addition to creating fluid user interfaces that can run on single-screen desktop, tablet and smartphone setups. Creating a user interface for multiple-screen and foldable setups is seen as catering to a rare use case because there are few foldable devices like the Microsoft Surface Duo on the market, let alone in circulation. Another question that can crop up is multiple-screen desktop-computing setups and how to take advantage of them when creating software.
What I see of this is the rise of software-development solutions that are about creating software for as many different computing platforms as possible.
Now Google have laid on more features for Chrome OS with Build 89. One of these is to interlink your Android smartphone with your Chromebook. Here, this feature, called Phone Hub, offers the ability to mute or “ping” your phone from the Chromebook or enable the Android hotspot function. You also have the ability to “hand over” Websites you started browsing on your Android phone to your Chromebook’s display. It doesn’t yet seem to offer the ability to continue “chatting by SMS” from your Chromebook or move photos you took with your Android phone to your Chromebook, but these could be seen as future directions for Phone Hub.
There is the ability to sync the list of Wi-Fi networks that you have set your Android phone or Chromebook up with so that both devices have the SSIDs and passwords for all these networks.
Android and Chrome OS now support Google’s “Nearby Share” across-the-room data transfer so you could move that photo or PDF between your Android phone, Android tablet or Chromebook without using the Internet. The same goes for when another Android or Chrome OS user has that Weblink or photo they want to share with you and you want to see it on your Chromebook.
Chrome OS can now remember the last five “copy-and-paste” Clipboard entries. You take advantage of this by pressing the “Everything” key (the concentric-rings or magnifying-glass key) and the V key together to paste from any of the last five “cuts” or “copies” into your document.
The “Tote” has been added to Chrome OS’s file manager’s default view to bring up frequently-used, new or “pinned” files. For Windows users, this is similar to the Quick Access view, which shows frequently-used folders and recently-opened files.
There is an option to have the context menu show further relevant information about something you have highlighted and right-clicked. This will allow you to bring up options like unit, currency or time-zone conversions, definitions or translations in the context menu.
Google has even worked on the lock screen by allowing you to customise it further. This will include making it interactive in the context of media controls and similar functionality.
This is part of newer directions for Google’s Chrome OS desktop operating system. For example, there will be a push for Google to offer meaningful functionality updates to Chromebook users every month. But I see issues with this approach where buggy code can be rushed into Chrome OS in order to get that PR-worthy feature into the operating system.
Another issue is making the Chrome OS platform a long-lived desktop computing platform like Windows and MacOS. There were concerns about older Chromebooks missing out on Chrome OS updates due to arbitrary cut-off periods of five to eight years from manufacture. This was affecting people who purchased second-hand Chromebooks or were taking advantage of seasonal specials where manufacturers were offloading surplus prior-generation Chromebook inventory at cheap prices.
For subsequent Chrome OS builds, Google will revise the policies regarding end-of-support for older equipment. This may be about code availability for longer than eight years from manufacture, or about catering towards Windows-like hardware/software independence when it comes to continual support for the platform.
Here, Google will work with computer manufacturers to answer this problem. For example, OEMs will have to ship Chromebooks with a realistic long support life and build Chrome OS equipment capable of the very long service lives that are the norm with Windows. Google will even work out a way to push the latest code to Chromebooks at the browser level if not the operating-system level.
They also have a view to bringing back other form factors like the Chromebase “all-in-one” and the three-piece “Chromebox” form factors. Here, it is to prove that Chrome OS isn’t just about cut-price mass-market laptops anymore.
It shows that Google is seeing Chrome OS as a fully-fledged mass-market “open-frame” platform for regular desktop and laptop computers. What needs to still happen is for more software, including rich powerful software like games, to be written to run natively on this platform.
Google’s effort with Chrome OS and the Chromebook platform may see us heading back to the days of the late 1980s, when there were three dominant desktop/personal computing platforms, i.e. IBM-based computers with MS-DOS, the Apple Macintosh and the Commodore Amiga. But compared to that era, more hardware vendors will offer computers for both the Windows and Chrome OS platforms, rather than platforms being based around hardware and software offered by a particular vendor.
The Thunderbolt high-throughput data connection specification that Intel launched and pushed with Apple’s help has turned 10 this year. And a laptop that I reviewed on this site nearly 10 years ago gave a sign of things to come when it comes to how Thunderbolt is being implemented today.
This (Sony) VAIO Z ultraportable notebook with its accompanying Blu-ray writer media dock used a technology that prefigured the Thunderbolt standard, especially Thunderbolt 3.
When I reviewed the Sony VAIO Z ultraportable laptop during 2012, I was dabbling with the Intel Light Peak technology that, adapted for copper connectivity, would come to be known as Thunderbolt. This setup underscored what Thunderbolt 3 would be about as a popular use case.
This computer setup had a “Media Dock” expansion module with an integrated Blu-ray writer, a USB 2 connection, a USB 3 connection, Gigabit Ethernet connectivity, and HDMI and VGA outputs for a TV or monitor. But this “Media Dock” also served as an external graphics module for the Sony VAIO Z Ultrabook. These devices were connected using an Intel Light Peak cable with a USB Type-A connector that plugged into the host computer; to safely detach the expansion module, you had to press a button on the USB plug and wait a moment before you could disconnect the laptop.
Here, this setup which I used in 2012 underscored the use case that Thunderbolt 3 over USB-C and newer generations of this connection would be about: a high-speed connection between a laptop, all-in-one or low-profile desktop computer and an expansion module of some sort. That expansion module would power a laptop computer while providing connectivity to a cluster of peripherals connected to it, housing data-storage media of some sort and/or offering better graphics processing horsepower.
Thunderbolt 3 is the preferred connection on the current range of Dell XPS ultraportable premium laptops
Initially this technology appealed to workstation-based use of Apple Macintosh computers that were being used by people involved in film and video production. Here, this was about RAID disk arrays being worked as “scratch disks” for rendering edited video footage or digitally-created animations. Or it was about high-resolution screen setups necessary as part of editing workstations. It also appealed as a path to bring in raw video footage from cameras after a day’s worth of filming in order to prepare “daily rushes” for review by producers and directors, or edit the footage in to a finished product.
The technology eventually evolved into Thunderbolt 3 and then Thunderbolt 4, which work not over their own connector type but over the USB-C connector. That made for a high-speed, cost-effective implementation of this standard. As well, the bandwidth has been multiplied by four to allow more data to flow.
The Dell WD19TB Thunderbolt 3 dock is an example of what this standard is about
Here the USB Type-C plug underscored the docking use case that Thunderbolt 3, USB4 and Thunderbolt 4 brought on. This became a real advantage with designing “thin and light” ultraportable laptops so these computers have a slimline look yet can be connected to workspaces that use docks based on these standards.
Razer Core external graphics module with Razer Blade gaming laptop – what Thunderbolt 3 is about
The external graphics module that this specification encouraged has maintained a strong appeal with gamers but I often see these devices as opening paths towards “fit-for-purpose” computing setups with enhanced graphics power based around ultraportable or cost-effective computers. This is more so with the latest Intel integrated graphics silicon offering more than just very limited “economy-class” graphics abilities.
What Intel needs to do is to make Thunderbolt 4 and subsequent generations become more ubiquitous as a high-throughput “equivalent to PCIe” wired connection between computer and peripheral.
Here this could be about affordable laptops and all-in-ones equipped with at least one Thunderbolt 4 port along with Intel-silicon motherboards for traditional desktop computers using this same connector. As well, Intel needs to keep the Thunderbolt standard “silicon-independent” so that AMD and other silicon vendors can implement this technology. It includes the ability for ARM-based silicon vendors to implement Thunderbolt-based technology in their computing designs.
Thunderbolt 3 and 4 can even open up ideas like using “standard-form-factor” computer designs like the ATX or Mini-ITX families to create so-called “expansion chassis” setups, opening up paths for the construction of devices like external graphics modules by independent computer stores or computer enthusiasts. Or it could open up the path towards a wide variety of docks and external graphics modules with different functionalities and specifications.
This could drive down the cost of add-on external graphics modules for those of us who want better graphics performance out of our computers some time down the track.
What Thunderbolt has meant is the rise of a very-high-throughput wired interface that can offer external devices the equivalent of what would be built in to a computer.
I have just read a Swiss article which talked about the US and Chinese hyperscale cloud platforms dominating the European cloud-computing scene. But this article is stating that European cloud-computing / online-service providers are catching up with these behemoths. Here these companies are using data protection as a selling point due to data-protection and user-privacy concerns by European businesses and government authorities.
A recent survey completed by the French IT consultancy Capgemini highlighted that the German-speaking part of Europe (Germany, Austria and Switzerland) was buying minimal European IT services. But the same Capgemini survey found that 45 percent of the respondents wanted to move to European providers in the future thanks to data protection and data sovereignty issues.
Data security is being given increasing importance due to recent cyber attacks and the increased digitalisation of production processes. But the Europeans have very strong data protection and end-user privacy mandates at national and EU level thanks to a strong respect for privacy and confidentiality within modern Europe.
COVID-19 placed a lot of European IT projects on ice, but there has been a constant push to assure business continuity even under the various public-health restrictions mandated during the pandemic. This includes support for distributed working, whether that be home-office working or remote working.
But how is this relevant to European households, small businesses and community organisations? I do see this as being relevant due to the use of various online and cloud IT services as part of our personal lives, thanks to the likes of search engines, email/messaging, the Social Web, online entertainment and voice-driven assistants. As well, small businesses and community organisations show interest in online and cloud-based computing as a means of benefiting from what may be seen as “big-time” IT without needing much in the way of capital expenditure.
It will be a slow and steady effort for Europe to have online and cloud computing on a par with the US and Asian establishment but this will be about services that respect European privacy, security and data-sovereignty values.
The recent saga involving Facebook denying Australian users access to news content has highlighted the need for multiple paths to follow your daily news online from respected sources. This is more so where we want a single-view approach to aggregating content from multiple news services.
Often the solution is to subscribe to multiple email newsletters or load your mobile devices with news apps provided by multiple news outlets. This becomes an issue if you follow multiple news outlets and you want that “aggregated news view” across the different outlets — that is, having in the one screen view a list of headlines or articles from multiple sources.
The technology that I see regaining currency for this goal is RSS or “Really Simple Syndication” Webfeeds. This is where a Website provides a special always-updated XML file representing new and updated content.
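To make the idea concrete, here is a sketch of what that always-updated XML file looks like and how software reads it. The feed content and URLs are hypothetical, and the parsing uses Python’s standard library rather than any particular feed reader’s code:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 2.0 feed of the kind a news site might publish.
RSS_SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <link>https://news.example.com/</link>
    <description>Latest headlines</description>
    <item>
      <title>First headline</title>
      <link>https://news.example.com/articles/1</link>
      <pubDate>Mon, 01 Mar 2021 08:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Second headline</title>
      <link>https://news.example.com/articles/2</link>
      <pubDate>Tue, 02 Mar 2021 08:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def read_headlines(xml_text):
    """Return (title, link) pairs for each item in an RSS 2.0 feed."""
    channel = ET.fromstring(xml_text).find("channel")
    return [(item.findtext("title"), item.findtext("link"))
            for item in channel.findall("item")]

for title, link in read_headlines(RSS_SAMPLE):
    print(title, "->", link)
```

A feed reader simply re-fetches this file periodically and shows any `<item>` entries it has not seen before — which is all the “always-updated” behaviour amounts to.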
This technology had a lot of currency in the early 2010s, with popular Web browsers offering RSS Webfeed management under that familiar orange Webfeed icon. At the same time, Google ran an online RSS Webfeed reader, but its demise sidelined the popular takeup of this technology.
RSS Webfeeds are still being used as a way to syndicate and synchronise content across the Web. Here, most Websites like this HomeNetworking01.info Website you are reading use this approach to facilitate “master” synchronisation for applications like email newsletters or content discovery such as “sitemaps”. It is also used very heavily in the podcast ecosystem to alert users when they have new episodes available of a podcast they subscribe to.
There are still some RSS feed readers out there that are worth using so you can craft your own personal newsfeeds.
Two methods of operation
Feedly – an example of an online Webfeed reader that shows a custom newsfeed
But I see these Webfeed readers and podcast managers as falling into two different categories based on where the subscription and synchronisation data is held. Such data represents the Webfeeds you are following and which articles you have read or podcasts you have listened to.
The first type is the “device-based” reader that keeps this data on the device the user uses to read these Webfeeds or listen to podcasts. Examples include email clients with RSS Webfeed reader functionality, Web browsers with built-in RSS Webfeed reader functionality, or Webfeed reader and podcast manager apps that don’t work with an online user account.
In this case, anything you have read on that device is deemed read as far as that device is concerned. As well, if you add or delete Webfeeds on that device, these changes only apply to that device.
The second type is the online reader that is associated with a remote online backend of some sort. This can be a purpose-built online news aggregator or podcast manager that has an online infrastructure built up by its vendor. Or it could be part of an email or similar service that integrates RSS Webfeed management and is tied to the user’s service account. That also encompasses business cloud-computing backends offering this function or a NAS or file server that runs RSS feed-manager or podcast-manager software.
This type of RSS feed reader will be increasingly seen as the way to go for managing and viewing RSS Webfeed and podcast collections. This is due to most of us having multiple computing devices of some form or another, with a desire to view our Webfeeds across the different devices.
A popular example of this is Feedly, which works on a freemium approach with a generous free-use allowance. It uses a Web view but has native clients for mobile platforms. It also offers social sign-on for Google and Facebook accounts as well as the ability to create a unique account.
Here, it would work with user accounts that hold Webfeed subscription and synchronisation details. The end-users gain access to these feeds through a Webpage or a first-party or third-party native software client written for that end-user’s platform. This approach supports multi-device use in such a way that what is read on one device is deemed read on other devices. As well, if you add or delete Webfeeds on your account, these changes are reflected on all devices associated with your account.
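The multi-device behaviour described above can be sketched as a toy model: the account record lives on the server and every device client consults it, holding no state of its own. This is purely illustrative — real services such as Feedly expose the equivalent through an HTTP API, not Python classes:

```python
class FeedAccount:
    """Server-side record of one user's subscriptions and read state
    (hypothetical sketch of what an online reader's backend stores)."""
    def __init__(self):
        self.subscriptions = set()   # feed URLs the user follows
        self.read_articles = set()   # IDs of articles already read

class DeviceClient:
    """A client app on one device; it keeps no state of its own and
    relays every change straight to the shared account record."""
    def __init__(self, account):
        self.account = account

    def subscribe(self, feed_url):
        self.account.subscriptions.add(feed_url)

    def mark_read(self, article_id):
        self.account.read_articles.add(article_id)

    def is_read(self, article_id):
        return article_id in self.account.read_articles

account = FeedAccount()
phone = DeviceClient(account)
laptop = DeviceClient(account)

phone.subscribe("https://news.example.com/feed.xml")
phone.mark_read("article-42")

# The article marked read on the phone shows as read on the laptop,
# because both clients consult the same account record.
print(laptop.is_read("article-42"))   # True
```

This is why “read on one device is read on all devices” falls out naturally of the online-reader design, whereas a device-based reader would have one independent copy of this state per device.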
What to look for in an RSS feed reader
A well-designed RSS feed reader should allow you to group feeds hierarchically. This comes in handy for organising your Webfeeds based on common factors like country of origin, topic of interest or whatever suits you.
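Feed readers typically exchange such grouped feed lists as OPML files, where each folder is a nested `<outline>` element. Below is a small, hypothetical subscription list and a sketch of reading it with Python’s standard library (the feed URLs are illustrative):

```python
import xml.etree.ElementTree as ET

# A small, hypothetical OPML subscription list with one folder per topic.
OPML_SAMPLE = """<?xml version="1.0"?>
<opml version="2.0">
  <body>
    <outline text="Technology">
      <outline text="HomeNetworking01.info" type="rss"
               xmlUrl="https://homenetworking01.info/feed/"/>
    </outline>
    <outline text="World News">
      <outline text="Example News" type="rss"
               xmlUrl="https://news.example.com/feed.xml"/>
    </outline>
  </body>
</opml>"""

def feeds_by_folder(opml_text):
    """Map each top-level folder name to the feed URLs inside it."""
    body = ET.fromstring(opml_text).find("body")
    return {folder.get("text"): [feed.get("xmlUrl")
                                 for feed in folder.findall("outline")]
            for folder in body.findall("outline")}

print(feeds_by_folder(OPML_SAMPLE))
```

Because OPML is widely supported for import and export, a hierarchy you build in one reader can usually be carried across to another.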
For online feed readers, you need to make sure there is a client app that suits your device properly. This includes a user interface suited to your device type, be it a smartphone, tablet or regular computer.
How to discover a Webfeed
You may find that your Web browser has support for detecting RSS Webfeeds. This may be in the form of a button that glows orange when the browser detects a Webfeed. Clicking it either opens up the Webfeed in your browser or launches your Webfeed reader app so you can subscribe to that site’s Webfeed.
Desktop Web browsers that are based on Google Chrome and have access to Google’s Chrome Web Store can run a Chrome extension that detects RSS Webfeeds.
The popular news Websites will have a page which lists the Webfeeds available from their site. Here, you can click on these Weblinks to open the feeds, or right-click on each feed to copy its address into your feed reader.
Let’s not forget that most RSS feed readers will have a Webfeed-discovery option where you enter a site’s URL into a “search” dialog box. This will cause the feed reader to show you what feeds are available so you can add them to your feed list.
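That discovery option works because Websites advertise their feeds in the page’s HTML head using `<link rel="alternate">` tags — the standard RSS/Atom autodiscovery convention. Here is a sketch of how a reader could find those tags, using Python’s standard library and a hypothetical page snippet:

```python
from html.parser import HTMLParser

class FeedLinkFinder(HTMLParser):
    """Collects feed URLs advertised in a page's <head> via the
    standard <link rel="alternate"> autodiscovery convention."""
    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in self.FEED_TYPES and "href" in a):
            self.feeds.append(a["href"])

# Hypothetical snippet of a news site's home page.
PAGE = """<html><head>
<link rel="alternate" type="application/rss+xml"
      title="All articles" href="https://news.example.com/feed.xml">
<link rel="stylesheet" href="/style.css">
</head><body>...</body></html>"""

finder = FeedLinkFinder()
finder.feed(PAGE)
print(finder.feeds)   # ['https://news.example.com/feed.xml']
```

A real feed reader would first fetch the page at the URL you typed, then run this kind of scan over the returned HTML and present the discovered feeds for you to subscribe to.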
What needs to happen
There need to be a number of polished, capable online RSS feed reader services that are promoted to businesses and consumers, in order to allow people to make their own news views effectively. As well, there has to be a return to simplified Webfeed discovery for news Websites.
The cohort of smart TVs, set-top boxes and the Internet of Things needs to come on board the RSS Webfeed bandwagon as much as regular and mobile computing devices. This could be in the form of a Webfeed-driven “teletext” experience for smart TVs, or simply smart displays of the Amazon Echo Show or Google Nest Hub ilk being able to show your custom RSS-driven news feeds at your command.
It is still worth remembering that the RSS Webfeed is to be valued as an information service in its own right — more so as a way to create your own custom news views without relying on the big names of the Social Web to provide that feed.