There is a constant data-security and user-privacy risk associated with mobile computing.
This is underscored by the significant number of mobile apps that are part of “app-cessory” ecosystems for various Internet-of-Things devices, where a mobile app serves as a control surface for one of these devices. Let’s not forget that VPNs are also coming to the fore as a data-security and user-privacy aid for our personal-computing lives.
Expect this logo to appear alongside mobile-platform apps to signify they are designed for security
But how can we be sure that an app we install on our smartphones or tablets is written to best security practices? What has been identified is the need for an industry standard, supported by a trademarked logo, that lets us know that this kind of software is written for security.
A group called the Internet of Secure Things Alliance, known as ioXT, has started to define basic standards for secure Internet-of-Things ecosystems. It has defined various device profiles for different Internet-of-Things device types and determined the minimum and recommended requirements for a device to be certified as “secure”. This then allows the vendor to show a distinct ioXT-secure logo on the product or associated material.
Now Google and others have worked with ioXT to define a Mobile Application Profile that sets out minimum security standards for mobile-platform software to be deemed secure. At the moment, this is focused on app-cessory software that works with connected devices, along with consumer-facing privacy-focused VPN endpoint software. For that matter, Google is behind a “white-box” user-privacy VPN solution that can be offered under different labels.
This device profile has been written in an “open form” to cater for other mobile app classes that have specific data-security and user-privacy requirements. This will come about as ioXT revises the Mobile Application Profile.
The ioXT Internet-of-Secure-Things platform could be extended to certifying more classes of native mobile-platform and desktop-platform software that works with the Internet of Everything. The VPN aspect of the Mobile Application Profile can also apply to native desktop VPN-management clients or native and Web software intended to manage router-based VPN setups.
At least a non-perpetual certification program with a trademarked logo now exists for the Internet of Everything and mobile apps, assuring customers that the hardware and software are secure by design and by default.
– Free version has ads, in-app purchase for premium version
The idea of using a regular TV as the electronic equivalent of a chalkboard (blackboard) or whiteboard has been explored since the 1980s thanks to a few key drivers.
A use case being put forward was to work with the then-new hobby of home videography, thanks to the arrival of affordable video cameras and portable video recorders. Here it would be about creating title cards for one’s home video projects. As well, third-party peripheral vendors created light-pen setups that worked with various home-computer platforms like the Commodore “VIC” computers (VIC-20 and Commodore 64), the Tandy TRS-80 Model 1 and the BBC Micro. The software that came with these setups included an elementary “paint” program that worked with the light-pen and allowed the (low-resolution) drawings to be saved to the computer’s secondary-storage medium (cassette or floppy disk) or printed on a connected printer.
The mouse, along with various graphics programs for later computer platforms, extended the concept further, even though newer computers were hooked up to displays better than the average TV set.
But the concept has been revived using the CastPad app for Android. This app allows you to draw using your finger or stylus on your Android smartphone or tablet, then “cast” it to your TV or monitor that is connected to a Chromecast or has full Chromecast ability built in. There is also the ability to “cast” to other Android devices running the same software and connected to the same logical network that the Chromecast is connected to.
You can save what you drew to your Android device, but I am not sure whether it supports printing via Android’s print functionality. There is a free ad-supported version that is limited to five colours, which may be good enough to show to a child or use for games like Pictionary. But a premium version, which you can purchase through an in-app arrangement, allows for unlimited colours and a few more features.
A use case that was called out in the article was to improve a family Pictionary game that the article’s author played during their family’s Thanksgiving celebrations. Here, they had a Chromecast connected to their family home’s TV and used their Android smartphone to draw out the word ideas as part of gameplay.
But the app has other use cases such as conference facilities, classrooms and the like that are kitted out with a large-screen TV or video projector. Here, the CastPad app may work as a better approach to illustrating concepts in a basic manner and showing them to a larger audience as part of your presentation effort.
Apple could easily answer this app with something that runs on an iPhone or iPad and uses AirPlay to stream the canvas to an Apple TV. Or the app developers could simply port it to iOS to take advantage of that platform’s user base.
Similarly, there could be the ability to have you draw the graphic on the smartphone or tablet then project it through the Chromecast, which can be useful if you are preparing that diagram for a class. This can also be augmented with the ability to insert printed text in a range of font sizes, something that would appeal to “blackboard diagrammers”.
Apps like CastPad can exploit “screencasting” setups like AirPlay or Chromecast to turn the largest screen in the house or business into an electronic whiteboard and the touchscreen of your device into a “canvas”.
The news cycle in the USA has been primarily focused on bad news, especially what President Trump is up to or some natural disaster somewhere around the world, and a very similar issue is happening in other countries. A common issue drawn out regarding this oversaturation of bad news is that it can bring about fear, uncertainty and doubt regarding our lives, with some entities taking advantage of it to effectively manipulate us.
Some of us follow particular blogs or Facebook pages that carry curated examples of good news, which can break this monotony and show solutions for the highlighted problems. Google is extending this idea with a function they are building into the Google Assistant platform, featuring stories that are curated by people rather than machines and, in a lot of cases, drawn from a variety of media sources. This is facilitated by the Solutions Journalism Network, a non-profit focused on solution-oriented media coverage.
Of course, there will be the doubters and skeptics who will think that we aren’t facing reality and are dwelling in the “Hippie Days” and the “Jesus People” era of the 1960s and early 1970s. But being able to come across positive solutions for the various problems being put forward, including people working “outside the box” to solve that problem can inspire us.
This feature is being offered on an experimental basis in the USA only and can be used on your Google Home or other Google Assistant devices. But as this application is assessed further, it could easily be made available across more countries.
A question that is appearing for Android users is whether software developers can sell software independently of Google Play
Over the last few months, Epic Games released their Android port of Fortnite in a manner that is very unusual for a mobile-platform app. Here, they released this port of the hit game as an APK software package file that is downloaded from their Website and installed on the user’s Android device, as if the user were installing a program on a regular Windows or MacOS computer. This allows them to maintain control over the sale of game additions and similar merchandise without having to pay Google a cut of their turnover. Or it could allow them to maintain control over the software’s availability, such as issuing beta or pre-release versions, or offering highly demanding software like action games only to devices known to perform at their best with it.
The Android platform has a default setting of disallowing software installations unless they come from the Google Play Store or the device manufacturer’s app store. This is a software-security setting to prevent the installation of software with questionable intent on your Android device. But the “regular” computer platforms have implemented other approaches to allow secure installation of software, thanks to their heritage of installing software delivered on packaged media or from download resources like the software developer’s Website or a download site. It also caters for the role regular computers play in business computing, where line-of-business software is installed on these systems by value-added resellers and solutions providers.
This question will become more real as the Android platform is taken beyond mobile devices and towards the smart TV, as with the NVIDIA Shield or recent Sony smart TVs. It could also appeal to other “smart devices” like network printers that are based on the Android software codebase, where there is a desire to add functionality through an app store.
Recent efforts that Microsoft, Apple and the open-source community have taken to protect our regular computers include software-authenticity certification, least-privilege execution, sandboxing and integrated malware detection. In some cases, users can remove software-authenticity certificates from their regular computer in case questionable software was deployed, as highlighted with the Lenovo Superfish incident.
Similarly, these operating system vendors and many third parties have developed endpoint-security software to protect these computers against malware and other security threats.
Google even introduced the Google Play Protect software to the Android platform to offer the same kind of “installed malware” detection that Windows Defender offers for the Windows platform and XProtect offers on the MacOS platform. Samsung even implements Knox as an endpoint-protection program on their Android devices.
Android does maintain its own app store in the form of the Google Play Store but allows device manufacturers and, in some cases, mobile-phone service providers to create their own app store, payment infrastructure and similar arrangements. But it is difficult for a third-party software developer to supply apps independent of these app stores including creating their own app store. This is more so for app developers who want to sell their software or engage in further commerce like selling in-game microcurrency without having to pay Google or others a cut of the proceeds for the privilege of using that storefront.
Android users can install apps from other sources, but they have to go into their phone’s settings and enable “install unknown apps” or a similar option before they can install apps from sources other than the Google Play Store or their OEM’s / carrier’s app store.
What could be done for the Android platform is to support authenticated software deployment using the same techniques as Microsoft and Apple with their desktop and server operating systems. This could be augmented with authenticated app stores, allowing software developers, mobile carriers, business solutions providers and the like to implement their own app stores on the Android platform. The authentication platform would also need to let end-users remove trusted-developer certificates and let certificate authorities revoke them.
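To illustrate what such an authenticated-deployment arrangement might look like, here is a toy Python sketch of a device-side trust store with both end-user removal and authority revocation of developer certificates. The class, its behaviour and the placeholder certificate bytes are my assumptions for illustration, not any real Android API:

```python
import hashlib

class DeveloperTrustStore:
    """Hypothetical device-side trust store for developer certificates,
    modelled on how desktop operating systems pin code-signing certificates."""

    def __init__(self):
        self.trusted = set()   # SHA-256 fingerprints of trusted developer certificates
        self.revoked = set()   # fingerprints removed by the user or revoked by an authority

    @staticmethod
    def fingerprint(cert_der: bytes) -> str:
        # Fingerprint the (DER-encoded) certificate bytes
        return hashlib.sha256(cert_der).hexdigest()

    def trust(self, cert_der: bytes):
        self.trusted.add(self.fingerprint(cert_der))

    def revoke(self, cert_der: bytes):
        # End-user removal or authority revocation (cf. the Superfish cleanup)
        self.revoked.add(self.fingerprint(cert_der))

    def may_install(self, cert_der: bytes) -> bool:
        fp = self.fingerprint(cert_der)
        return fp in self.trusted and fp not in self.revoked

store = DeveloperTrustStore()
dev_cert = b"placeholder developer certificate bytes"
store.trust(dev_cert)
print(store.may_install(dev_cert))   # a package signed by a trusted developer installs
store.revoke(dev_cert)
print(store.may_install(dev_cert))   # revocation blocks further installs
```

The key design point is that trust and revocation are separate lists, so a certificate can be distrusted after the fact without rewriting the trusted set.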
It could allow for someone like, for example, Valve or GOG to operate a “Steam-like” storefront which is focused towards gaming. Or an app developer like Microsoft could use their own storefront to sell their own software like the Office desktop-productivity suite. Then there are people courting the business segment who want to offer a hand-curated collection of business-focused apps including line-of-business software.
But there would have to be some industry-level oversight of certified apps and app stores to make it hard for questionable software to be delivered to the Android ecosystem. This would also include app stores making sure that their payment mechanisms aren’t a breeding ground for fraud in its various forms.
A common question that will crop up regarding alternative app stores and developer-controlled or third-party-controlled app-level certification is the ability to purvey apps that have socially-questionable purposes like gambling or pornography. Here, the Android ecosystem will have to allow end-users to regulate the provenance of the software installed on these devices.
At least the Fortnite software-distribution conversation is raising questions about how software is delivered to the Android mobile-computing platform and whether this platform is really open.
There have been some recent situations where YouTube has become arrogant in how they treat end-users, content creators and advertisers, thanks to their effective monopoly position for user-generated video content. One of these was a fight that Google and Amazon got into over voice-driven personal assistants, which led to Google removing YouTube support from Amazon’s Echo Show smart display. I even wrote that it is high time that YouTube faced competition in order to lift its game.
But Instagram, owned by Facebook, has set up its own video-sharing platform called IGTV. This will be available as a separate iOS/Android mobile-platform app but will also allow the clips to appear in your main Instagram user experience.
Initially this service will offer vertical-format video up to one hour long, a format chosen to complement the fact that it is likely to be watched on a handheld smartphone or tablet. The one-hour length will be offered to select content creators rather than to everyone, while most of us will be limited to ten minutes. This may also suit the creation of “snackable” video content.
Currently Instagram offers video posting of up to 60 seconds on its main feed or 15 seconds in its Stories function, which is why I often see Stories pertaining to the same event with many videos daisy-chained together.
The IGTV user experience will have you immediately begin watching video content from whoever you follow on Instagram. There will be playlist categories like “For You” (videos recommended for you), “Following” (videos from whom you follow), “Popular” (popular content) and “Continue Watching” (clips you are already working through).
The social-media aspect will allow you to like or comment on videos as well as sharing them to your friends using Instagram’s Direct mode. As well, each Instagram creator will have their own IGTV channel which will host the longer clips.
A question that can easily come up is whether Instagram will make IGTV work for usage beyond mobile-platform viewing. This means support for horizontal aspect ratios, or viewing on other devices like smart-display devices of the Echo Show ilk, regular computers, or smart-TV / set-top devices including games consoles.
It is an effort by Instagram and Facebook to compete for video viewers and creators, but I see the restriction to the vertical format as a limitation if the idea is to directly compete with YouTube. But Facebook and Instagram need to look at what YouTube isn’t offering and the platforms it has deserted in order to gain an edge over it.
A common frustration that we all face when we play video games on a laptop, tablet or smartphone is that these devices run out of battery power after a relatively short amount of playing time. It doesn’t matter whether we use a mobile-optimised graphics infrastructure like what the iPad or our smartphones are equipped with, or a desktop-grade graphics infrastructure like the discrete or integrated graphics chipsets that laptops are kitted out with.
What typically happens in gameplay is that the graphics infrastructure paints multiple frames to create the illusion of movement. But most games tend to show static images for a long time, usually while we are planning the next move in the game. Some of these situations may use a relatively small area where animation takes place, be it to show a move happening or a “barberpole” animation, which is a looping animation that exists for effect when no activity takes place.
Microsoft is working on an approach for “painting” the interaction screens in a game so as to avoid the CPU and graphics infrastructure devoting too much effort to this task. The goal is to allow a game to be played without consuming too much power, taking advantage of human visual perception to scale the number of frames needed to make an animation. There is also the concept of predictability for interpreting subsequent animations.
But a lot of the theory behind the research is very similar to how most video-compression codecs and techniques work. Here, these codecs use a “base” frame that acts as a reference and data that describes the animation that takes place relative to that base frame. Then during playback or reception, the software reconstructs the subsequent frames to make the animations that we see.
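A minimal sketch of that base-frame-plus-delta idea, with frames reduced to flat lists of pixel values for simplicity; the function names and data layout are mine, for illustration only:

```python
# Keyframe-plus-delta sketch: store one base frame, then only the
# pixel positions whose values changed relative to it.

def make_delta(base, frame):
    # Record only positions whose pixel value differs from the base frame
    return {i: v for i, (b, v) in enumerate(zip(base, frame)) if b != v}

def apply_delta(base, delta):
    # Reconstruct the frame from the base frame plus the recorded changes
    frame = list(base)
    for i, v in delta.items():
        frame[i] = v
    return frame

base = [10, 10, 10, 10]
frame = [10, 99, 10, 12]
delta = make_delta(base, frame)
print(delta)                            # only the two changed pixels are stored
print(apply_delta(base, delta) == frame)
```

For a mostly static game screen, the delta is tiny, which is exactly why this encoding saves work compared with repainting every pixel of every frame.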
The research is mainly about an energy-efficient approach to measuring these perceptual differences during interactive gameplay based on the luminance component of a video image. Here, the luminance component of a video image would be equivalent to what you would have seen on a black-and-white TV. This therefore can be assessed without needing to place heavy power demands on the computer’s processing infrastructure.
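As a rough sketch of this kind of measure, the luminance of each pixel can be derived with the standard ITU-R BT.601 weighting and compared between frames. The mean-absolute-difference metric and the redraw threshold below are my illustrative assumptions, not Microsoft's published method:

```python
# Cheap luminance-based change measure between two frames of RGB pixels.

def luminance(r, g, b):
    # ITU-R BT.601 weighting: the black-and-white-TV component of a colour pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def perceptual_change(frame_a, frame_b):
    # Mean absolute luminance difference across corresponding pixels
    diffs = [abs(luminance(*p) - luminance(*q)) for p, q in zip(frame_a, frame_b)]
    return sum(diffs) / len(diffs)

def needs_redraw(frame_a, frame_b, threshold=1.0):
    # Skip repainting when the frames are perceptually near-identical
    return perceptual_change(frame_a, frame_b) > threshold

still = [(100, 100, 100)] * 4
moved = [(100, 100, 100)] * 3 + [(200, 120, 90)]
print(needs_redraw(still, still))   # nothing changed, no repaint needed
print(needs_redraw(still, moved))   # one pixel moved enough to warrant a repaint
```

Because only one weighted sum per pixel is involved, this is far lighter on the processor than a full colour-space comparison.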
The knowledge can then be used to design graphics-handling software for games played on battery-powered devices, or to allow a “dual-power” approach for Windows, MacOS and Linux games. Here, a game can show detailed, high-performance animation on a laptop connected to AC power but scale that detail back while the computer runs on battery power.
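The “dual-power” idea can be sketched as a simple policy function that picks an animation frame rate from the power source and how much of the screen is actually changing. The frame-rate figures and thresholds here are illustrative assumptions, not values from the research:

```python
# Toy "dual-power" policy: choose a target frame rate from the power
# source and the fraction of the frame that differs from the last one.

def target_frame_rate(on_ac_power, changed_ratio):
    # changed_ratio: fraction of the frame that changed since the last repaint
    if on_ac_power:
        return 60          # full detail and frame rate on mains power
    if changed_ratio < 0.01:
        return 5           # near-static scene: barely repaint at all
    if changed_ratio < 0.25:
        return 30          # small animated region, e.g. a "barberpole" loop
    return 45              # large movement, but still capped on battery

print(target_frame_rate(True, 0.5))
print(target_frame_rate(False, 0.005))
print(target_frame_rate(False, 0.1))
```

A real game engine would feed this from the operating system's power-state API and the perceptual-change measure rather than hard-coded ratios.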
On this Website, I have previously covered how certain technologies that work with our smartphones are being used to verify the authenticity and provenance of various foodstuffs and premium drinks.
It has come in the form of NFC-enabled bottle tops used on some premium liquor, along with smartphone apps to determine whether the drink was substituted and to let the supplier provide more information to the customer. In France, the QR code has been used to allow consumers to identify the provenance of processed meat sold at the supermarket, in response to the 2013 horsemeat scandal that affected the supply of processed beef and beef-based “heat-and-eat” foods in Europe.
The problem of food and beverage adulteration and contamination is rife in China and other parts of Asia, but it has also happened elsewhere, as with the abovementioned horsemeat crisis, and there is a perpetual question in the US market over whether extra-virgin olive oil is really extra-virgin. It can extend to whether produce is organic or, in the case of eggs or meat, whether it was free-range. This has led various technologists to explore the use of IT to track the authenticity and provenance of what ends up in our fridges, pantries or liquor cabinets.
The latest effort uses blockchain, the “distributed ledger” technology that makes Bitcoin, Ethereum and other cryptocurrencies tick. This time, it is used in conjunction with NFC, QR codes and mobile-platform native apps to create an electronic “passport” for each packaged unit of food or drink. It was put together by a China-based startup after a cat belonging to one of the founders needed to go to the vet after eating contaminated food bought from an eBay-like online market based in China.
The initial setup has a tamper-evident seal wrapped around the tin or other packaging, with the seal carrying an NFC element and a printed QR code. A smartphone app scans the QR code and reads the NFC element, which fails once the seal is broken, to verify that the seal is still intact. Once this data is read on the mobile device, the food item’s electronic “passport” appears, showing what was handled where along the production chain.
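The ledger side of such a “passport” can be illustrated with a simple hash chain, where each handling event records the hash of the previous record so any tampering with history is detectable. This is only a toy sketch of the underlying idea; the startup's actual system and data formats are my assumptions here:

```python
import hashlib
import json

# Hash-chained "passport" for one packaged food item: each handling
# event links to the hash of the previous record.

def record_hash(record):
    # Canonical JSON so the same record always hashes the same way
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def add_event(chain, event):
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"event": event, "prev": prev})

def verify(chain):
    # Every record must reference the hash of the record before it
    for i in range(1, len(chain)):
        if chain[i]["prev"] != record_hash(chain[i - 1]):
            return False
    return True

passport = []
add_event(passport, "packed at farm, organic certified")
add_event(passport, "received at distribution centre")
add_event(passport, "stocked at retailer")
print(verify(passport))                  # chain intact
passport[0]["event"] = "packed at farm"  # tamper with an earlier record
print(verify(passport))                  # chain now fails verification
```

A real blockchain deployment distributes copies of this chain across many parties, so no single link in the supply chain can quietly rewrite the history.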
At the moment, the seal is like a hospital bracelet which is sturdy enough to be handled through the typical logistics processes but is fragile enough to break if the food container is opened. This could work with most packaged foodstuffs but food suppliers could then design this technology to work tightly with their particular kind of packaging.
The blockchain-driven “passport” could be used to identify which farm was the source of the produce concerned, with a human-readable reference to the agricultural techniques used, i.e. organic or free-range methods. In the case of processed meat and meat-based foods, the technology can verify the kind and source of the meat used. This is important for religious or national cultures where certain meats are considered taboo, like the Muslim and Jewish faiths with pig-based meats, British and Irish people with horsemeat, or Australians with kangaroo meat.
Once the various packaging-technology firms improve and implement these technologies, it could facilitate how we can be sure that we aren’t being sold a “pig in a poke” when we buy food for ourselves or our pets.
Google has answered the setup method that Apple implemented for their AirPods wireless in-ear headset by implementing a software-driven “quick-pair” setup that is part of Android.
This method, called Bluetooth Fast Pairing, works on Android handsets and other devices that run Android 6.0 Marshmallow onwards, have Google Play Services 11.7 or newer installed, and support Bluetooth 4.0 Low Energy (Bluetooth Smart) connectivity. You will have to enable Bluetooth and Location functionality on your handset, but you don’t have to trawl through Bluetooth device lists on your smartphone for a particular device identifier to complete the setup process.
Click or tap this image to see Google Fast Pairing in action
It is meant to provide quick discovery of your compliant Bluetooth accessory device in order to expedite the setup process for new devices or to “re-pair” Bluetooth connections that have failed. The latter situation can easily occur if data in the device regarding associated Bluetooth devices becomes corrupted or there is excessive Bluetooth interference.
The user experience requires you to put your accessory device, like a Bluetooth headset, speakers or car stereo, into Bluetooth-setup mode. This may simply mean holding down the “setup” or “pair” button until an LED flashes a certain way or you hear a distinct tone. In the case of home and car audio equipment with a display of some form, you would use the “Setup Menu” to select “Bluetooth Setup” or something similar.
Then you receive a notification on your Android device referring to the device you just enabled for pairing, showing its product name and a thumbnail image. Tap on this notification to continue the setup process; you may also receive an invitation to download a companion app for devices that work on the “app-cessory” model for extended functionality.
Google implements this by using Bluetooth Low Energy “beacon” technology to enable the device-discovery process. This is similar to the various beacon approaches for marketing and indoor navigation facilitated by Bluetooth Low Energy, but these beacons only appear while your accessory device is in “Bluetooth setup” mode.
The Google Play servers provide information about the device such as its thumbnail image, product name or link to a companion app based on a “primary-key” identifier that is part of the Bluetooth Low Energy “beacon” presented by the device. Then, once you tap the notification popup on your Android device, the pairing and establishment process takes place under Bluetooth Classic technology.
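That lookup flow could be sketched like this, with an invented model directory standing in for the Google Play servers; the model IDs, URLs and package names are all made up for illustration:

```python
# Sketch of the Fast Pairing lookup: the accessory's BLE beacon carries
# a short model identifier, and a server-side table maps it to the
# product name, thumbnail image and optional companion app.

MODEL_DIRECTORY = {
    "0x2CC1": {"name": "Example Bluetooth Headphones",
               "image": "https://example.com/headphones.png",
               "companion_app": "com.example.headphones"},
}

def on_beacon_received(model_id):
    # Look up the advertised model ID and build the pairing notification
    entry = MODEL_DIRECTORY.get(model_id)
    if entry is None:
        return None   # unknown device: fall back to the classic pairing list
    notification = {"title": f"Connect {entry['name']}?",
                    "thumbnail": entry["image"]}
    if entry.get("companion_app"):
        notification["offer_app"] = entry["companion_app"]
    return notification

print(on_beacon_received("0x2CC1"))   # known device: rich notification
print(on_beacon_received("0xFFFF"))   # not in the directory
```

Tapping the resulting notification is where the sketch ends and the real system hands over to Bluetooth Classic pairing.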
I see this as also being similar to the various “Plug and Play” discovery processes implemented in Microsoft Windows and Apple MacOS whenever you connect newer peripherals to your computer. Here, Microsoft and Apple keep data about various peripherals and expansion cards that are or have been on the market to facilitate installing any necessary drivers or other software, or invoking class drivers that are part of the operating system. Google and the Android platform could take this further with USB-C and USB Micro-AB OTG connectivity to implement the same kind of “plug-and-play” setup for peripherals connected this way to Android devices.
This system could be taken further by integrating similar logic and server-hosted databases in to other operating systems for regular and mobile computer platforms to improve and expedite the setup process for Bluetooth devices where the host device supports Bluetooth Low Energy operation. Here, I would like to see it based on the same identifiers broadcast by each of the accessory devices.
The Bluetooth Fast Pairing ability that Google gave to the Android platform complements the NFC-based “touch-and-go” pairing that has been used with that platform as another way to simplify the setup process. Fast Pairing suits manufacturers who don’t have room in their accessory device’s design for an NFC area, whether because the device is very small or because NFC doesn’t play well with its aesthetics or functionality.
It may be a point of confusion for device designers like Alpine with their car stereos, which stay in “discoverable” or “pairing” mode all the time so you can commence enrolling the device at your phone’s user interface. Here, the device manufacturer may have to limit discoverability to certain circumstances, like no devices being paired or connected, or you having to select the “Bluetooth” source or “Setup” mode.
At least Google has put up a way to allow quicker setup of Bluetooth accessories with Android platform devices without needing to build the requirement into the hardware.
TV setups with large screens and powerful sound systems could also appeal to videocalls where many people wish to participate
A reality that is surfacing with online communications platforms is the fact that most of us prefer to operate these platforms from our smartphones or tablets. Typically we are more comfortable with using these devices as our core hubs for managing personal contacts and conversations.
But there are times when we want to use a large screen such as our main TV for group videocalls. Examples of this may include family conversations with loved ones separated by distance, more so during special occasions like birthdays, Thanksgiving or Christmas. In the business context, there is the desire for two or more of us to engage in video conferences with business partners, suppliers, customers or employees separated by distance. For example, a lawyer and their client could be talking with someone who is selling their business as part of assessing the validity of that potential purchase.
This is more so when there is that family special moment
But most of the smart-TV and set-top platforms haven’t been engineered to work with the plethora of online-communications platforms out there, even though Skype attempted to get this happening with various smart-TV and set-top platform vendors, allowing the smart TV to serve as a Skype-based group videophone once you purchased and connected a Webcam accessory supplied by the manufacturer.
The Skype approach required users to log in to the Skype client on their TV or video device, along with buying and installing a camera kit that worked with the TV. Entering credentials or searching for contacts meant a “pick-and-choose” or SMS-style text-entry method, which could lead to mistakes. By comparison, most of us are more comfortable performing these tasks on our smartphones or tablets because a touchscreen keyboard or hardware keyboard accessory makes text entry easier.
An Apple TV or Chromecast that has the software support for and is connected to a Webcam could simplify this process and place the focus on the smartphone as a control surface for videocalls
The goal I am outlining here is for one to be able to use a smart TV or network-connected video peripheral, equipped with a Webcam-type camera and connected to the same home network and Internet connection as their mobile device, to establish or continue a videocall on the TV’s large screen. The large-screen TV, with its built-in speakers or connected sound system, along with the Webcam, would become the videocalling equivalent of the speakerphone we use for group or “conference” telephone calls when multiple people at either end want to participate.
Set-top devices designed to work with mobile-platform devices
A very strong reality that is surfacing for interlinking TVs and mobile devices is the use of a network-enabled video peripheral that provides a video link between the mobile device and video peripheral via one’s home network.
One of these devices is the Apple TV, which works with iOS devices thanks to Apple AirPlay, while the other is the Google Chromecast, which works with Android devices. Both of these video devices can connect to your home network via Wi-Fi wireless or Ethernet, with the Apple TV offering the latter option out of the box and the Chromecast offering it as an add-on option. As well, the Chromecast’s functionality is being integrated into various newer smart TVs and video peripherals under the “Google Cast” or “Chromecast” feature name.
Is there a need for this functionality?
As I said earlier on, the main usage driver for this functionality would be to place a group videocall where multiple people at the one location want to communicate with another party. The classic examples would be families communicating with distant relatives, or businesses placing conference calls that involve multiple decision makers with two or more of the participants at one of the locations.
Most of the mobile messaging platforms offer some form of videocalling capability
In most cases, the “over-the-top” communications platforms like FaceTime, Skype, Viber, Facebook Messenger and WhatsApp are primarily operated through the native mobile client app or functionality that is part of the mobile platform. This way of managing videocalls appeals to most users because of access to the contact directory on their device, along with the handheld nature of the typical smartphone suiting the activity.
It is also worth knowing that some, if not all, of the “over-the-top” communications platforms offer a “conference call” or “three-way call” function as part of their feature set, extending it to videocalls at least in the business-focused variants. This is where multiple callers from different locations can take part in the same conversation. Such setups typically show the “other” callers as part of a multiple-picture “mosaic” on the screen, where the large screen can come in handy for seeing all the callers at once.
How is this achieved at the moment?
At the moment, these set-top platforms haven’t been engineered to allow for group videocalling. Users would have to invoke screen-mirroring functionality on their mobile devices once they logically associate them with the video endpoint devices, then position the mobile device on or in front of the TV so the other side can see their group, something which can be very precarious at times.
How could Apple, Google and co improve on this state of affairs?
Should this still be the way to make group videocalls on your Apple TV or Chromecast?
Apple and Google could improve their AirPlay and Chromecast platforms to provide an audio-video-data feed from the video peripheral back to the mobile device using that peripheral. This would work in tandem with a companion Webcam/microphone accessory that can be installed on the TV and connected to the set-top device. For example, Apple could offer a Webcam for the latest-generation Apple TV as an “MFi” accessory, like they do with the game controllers that let it serve as a games console.
When users associate their mobile devices with a suitably-equipped Apple TV or Chromecast device that supports this enhancement, the communications apps on their phones would detect the camera and microphone connected to the video peripheral. Users would then see that camera offered as an alternative camera choice while engaged in a videocall, along with the microphone and TV speakers offered as a “speakerphone” option.
What will this entail?
It may require Apple and Google to write mobile endpoint software into their iOS and Android operating systems to handle the return video feed and the presence of cameras connected to the Apple TV or Chromecast.
Similarly, the tvOS and Chromecast platforms would need extra endpoint software written for them, while the devices themselves would need hardware support for Webcam devices.
At the moment, the latest-generation Apple TV has a USB-C socket, but this just serves as a “service” port; it could be opened up as a peripheral port for wired MFi peripherals like a Webcam. Google uses a microUSB port on the Chromecast, but this is primarily a power-supply and network-connection port. They could, however, implement an “expansion module” that provides connectivity to a USB Webcam compliant with the USB Video and Audio device classes.
These situations could be answered through a subsequent hardware generation for each of the devices or, if the connections are software-addressable, a major-function firmware update could open up these connections for a camera.
As for application-level support, it may require that the extra camera connected to the Apple TV or Chromecast device be logically enumerated as another camera device by all smartphone apps that use the phone’s cameras. The microphone in that camera and the TV’s speakers would also need to be enumerated as another communications-class audio device available to the communications apps. This kind of functionality could be implemented at operating-system level with very little work required of third-party communications software developers.
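The enumeration idea described above can be sketched in Python. Everything here is illustrative: `MediaDevice` and `enumerate_devices` are hypothetical names, not part of any real iOS or Android API. The point is simply that the operating system would merge the set-top device’s camera and audio hardware into the same device list that apps already consult, so third-party software needs no special-case code.

```python
from dataclasses import dataclass

@dataclass
class MediaDevice:
    device_id: str
    kind: str        # "camera" or "audio"
    label: str
    external: bool   # True when hosted by the Apple TV / Chromecast

def enumerate_devices(local, associated_peripherals):
    """Merge the phone's own devices with those offered by any
    associated video peripheral into one flat list for apps."""
    devices = list(local)
    for peripheral in associated_peripherals:
        devices.extend(peripheral)
    return devices

# The phone's built-in hardware...
phone_devices = [
    MediaDevice("cam-front", "camera", "Front camera", False),
    MediaDevice("cam-rear", "camera", "Rear camera", False),
]
# ...plus what a suitably-equipped set-top device would advertise.
apple_tv_devices = [
    MediaDevice("cam-tv", "camera", "Living-room TV camera", True),
    MediaDevice("spk-tv", "audio", "TV speakers and microphone", True),
]

all_devices = enumerate_devices(phone_devices, [apple_tv_devices])
```

A communications app iterating over `all_devices` would simply see one more camera and one more “speakerphone” audio device, with the permissions layer deciding whether the app may actually use them.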
User privacy can be assured through the same permissions-driven setup implemented in the platform’s app ecosystem that is implemented for access to the mobile device’s own camera and microphone. If users want to see this tightened, it could be feasible to require a separate permissions level for use of external cameras and audio-input devices. But users can simply physically disconnect the Webcam from the video peripheral device when they don’t intend to use it.
An alternative path for app-based connected-TV platforms
There is also an alternative path that smart-TV and set-top vendors could explore. Here, they could implement a universal network-based two-way video protocol that allows the smart TV or set-top device to serve as a large-screen video endpoint for the communications apps.
Similarly, a smart-TV / set-top applications platform could head down the path of using client-side applications that are focused for large-screen communications. This is in a similar vein to what was done for Skype by most smart-TV manufacturers, but the call-setup procedure can be simplified with the user operating their smartphone or tablet as the control surface for managing the call.
This could be invoked through techniques like DIAL (Discovery And Launch) that is used to permit mobile apps to discover large-screen “companion” apps on smart-TV or set-top devices in order to allow users to “throw” what they see on the mobile device to the large screen. As well, the connection to the user’s account could be managed through the use of a session-specific logical token established by the mobile device.
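As a rough illustration of the DIAL discovery step mentioned above, here is a minimal Python sketch. The multicast address, port and DIAL search target come from the published DIAL/SSDP specifications; the helper-function names are my own, and a real client would go on to fetch the URL in each response’s LOCATION header to learn about the large-screen device.

```python
import socket

# DIAL clients find large-screen devices by multicasting an SSDP
# M-SEARCH for the DIAL service type to 239.255.255.250:1900, then
# following the LOCATION header in each response.
SSDP_ADDR = ("239.255.255.250", 1900)
DIAL_SEARCH_TARGET = "urn:dial-multiscreen-org:service:dial:1"

def build_msearch(search_target=DIAL_SEARCH_TARGET, mx=3):
    """Build the SSDP M-SEARCH request that starts DIAL discovery."""
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        f"HOST: {SSDP_ADDR[0]}:{SSDP_ADDR[1]}\r\n"
        'MAN: "ssdp:discover"\r\n'
        f"MX: {mx}\r\n"
        f"ST: {search_target}\r\n"
        "\r\n"
    ).encode("ascii")

def discover(timeout=3.0):
    """Broadcast the search and collect raw responses (needs a live network)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), SSDP_ADDR)
    responses = []
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            responses.append((addr, data))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return responses
```

Once a smart TV or set-top device answers, the mobile app can ask its DIAL server to launch the matching large-screen companion app, with the session token handled at the application layer.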
This concept can be taken further through the use of the TV screen as a display surface, typically for communications services’ messaging functions or to show incoming-call notifications.
What we still need to think about is how to facilitate “dual-device” videocalling on the popular mobile platforms in order to simplify the task of establishing group videocalls using TVs and other large-screen display devices.
Google Motion Stills “before and after” demonstration output image – filmed from a car
Previously, I wrote an article about creating “visual wallpaper” for your electronic display, including the creation of “cinemagraphs”, which are still photos with a small amount of background animation. This was made feasible by Apple’s Live Photos feature that came with iOS 9, where you could take a photo with a key still image plus a small amount of motion.
The Live Photos concept was restricted to the Apple platform, and the social networks that hosted any Apple Live Photos typically either had to present a still or turn them into animated GIFs.
Google have answered this problem through an editing and conversion tool called Google Motion Stills, which allows you to turn an Apple Live Photo into something more presentable as well as export it as an animated GIF image. The software has integrated video-stabilisation logic that comes into play in keeping a still background while allowing certain parts to move. It can also “smooth out” panoramas, including images shot from a moving vehicle. The Motion Stills software also has the ability to optimise short videos to create video loops or cinemagraphs that appear to play on infinitely.
All this functionality is based on Google’s research for their Google Photos software, where they could do things like create animations from photo bursts uploaded to that service. This also includes their effort in stabilising videos uploaded by users to YouTube, where a lot of amateur video tends to be very shaky.
The software exports to animated GIF images because this file format has become the de facto standard for short silent video clips, and these GIF files can be used anywhere image files are used. Of course, the animations can also be saved as QuickTime movies, which work with most other video-editing software, especially software in the Apple ecosystem.
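To illustrate why animated GIFs can loop endlessly at all: looping is not part of the core GIF89a format itself, but comes from the Netscape application-extension block, which virtually all viewers honour, and a loop count of zero means “repeat forever”. Here is a minimal Python sketch of building that block from its published byte layout (the function name is my own):

```python
import struct

def netscape_loop_block(loop_count=0):
    """Build the GIF application extension that sets the animation
    loop count (0 = loop forever, as cinemagraph-style exports use)."""
    return (
        b"\x21\xff"            # extension introducer + application label
        b"\x0b"                # 11 bytes of application data follow
        b"NETSCAPE2.0"         # application identifier + auth code
        b"\x03\x01"            # sub-block: 3 bytes, sub-block ID 1
        + struct.pack("<H", loop_count)  # little-endian loop count
        + b"\x00"              # block terminator
    )
```

An encoder writes this block once, just after the global colour table, and every subsequent frame then replays in a cycle on compliant viewers.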
… if only we could get animated GIFs to work with DLNA-capable smart TVs