Tag: machine vision

With a special film, a smartphone can detect HIV and other diseases

Article

Smartphone accessory puts HIV diagnosis in doctors’ pockets | Engadget

My Comments

Samsung Galaxy Note 2 smartphone

Smartphones will soon come into play with more pathology tests

Increasingly, the smartphone’s integrated camera is being used as part of machine vision for medical applications. These technologies are based on the concept of litmus paper, which changes between different colours depending on whether a chemical solution is acidic or alkaline, something some of you may have seen demonstrated in high-school chemistry class. The first application I highlighted used a urinalysis “control stick” along with the smartphone’s camera and a special app to identify what is in a urine sample.

Now this technology is being brought to blood analysis, mainly to test for HIV, E. coli and staph in a patient’s blood. Here, a very small amount of the patient’s blood is placed on the film, which is then examined by the smartphone’s camera with the phone running a special app. Because the film turns a different colour in the presence of antibodies associated with a particular disease, there is no need to send the sample to a laboratory to be analysed.

This is pitched at third-world or rural communities where it is cost-prohibitive to have a pathology laboratory close enough to provide a quick-enough turnaround on test results for these diseases. The pictures can even be sent elsewhere for an expert opinion if the testing centre doesn’t have someone skilled enough. There are plans to adapt this technology to detect more illnesses or to check for pathogens in food and water.

What I also see in this is that it could open the path to “on-the-spot” screening-type pathology tests that avoid the need to send samples out to a lab, which I see as a boon for rural communities and telemedicine applications.
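At its core, the “machine vision” step in these apps comes down to matching the colour a test pad has turned against a chart of reference colours. Here is a minimal sketch of that idea; the reference colours and result labels are purely illustrative, not taken from any real test kit.

```python
import numpy as np

# Hypothetical reference chart: the average RGB colour a test pad turns
# for each result level (values are illustrative, not from a real kit).
REFERENCE_CHART = {
    "negative": (240, 230, 140),
    "trace":    (189, 183,  80),
    "positive": (107, 142,  35),
}

def classify_pad(pad_rgb):
    """Match a test pad's average colour to the nearest reference colour
    by Euclidean distance in RGB space."""
    pad = np.asarray(pad_rgb, dtype=float)
    best_label, best_dist = None, float("inf")
    for label, ref in REFERENCE_CHART.items():
        dist = np.linalg.norm(pad - np.asarray(ref, dtype=float))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# A pad colour sampled from the camera image that sits close to "positive"
print(classify_pad((110, 140, 40)))  # → positive
```

A production app would first locate the strip in the camera frame and correct for lighting, but the nearest-colour lookup above is the essence of replacing the human eye and printed chart.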

Technologies being used to slow down regrettable communications activity

Article

How Machine Learning Could Stop Drunk Posts On Facebook | Lifehacker

My Comments


The smartphone can end up causing trouble later on after a party

The party season is upon us, and some of us can use various electronic communications technologies to do something we would otherwise regret after we have had a few too many at the party. This could be using social media to share an embarrassing picture of someone while they were drunk, sending a stupid email to someone we know or knew, or making that call to “the wrong person”.

Some software developers have worked on various technologies to “put the brakes” on this kind of rash activity, such as Google’s effort to present a problem-solving CAPTCHA when we send an email late at night, or iOS apps that mask the contacts we are at risk of contacting when drunk. But Facebook have taken this further by implementing deep machine learning in their “slow-down” algorithms.

Here, they use the facial-recognition algorithms built for their pioneering image-tagging feature along with a mobile device’s camera to identify whether the user looks drunk. This is combined with other machine learning that assesses the context of the posts and links you intend to share, especially where you are tagging a person, so you aren’t at risk of sharing something you wouldn’t otherwise share. It would work with Facebook client software that has access to the webcam on your computer or the integrated front camera on your mobile device, but may not work with Web-based Facebook sessions where the browser doesn’t have programmatic access to the webcam.

This deep learning could also be used in “client-side” software as a way of avoiding drunk emailing or other risk-taking activities you could engage in at the computer. As I have seen before, a lot of advanced machine-learning research doesn’t just belong to one particular company to exploit in its own products; it can be licensed out to other software developers to refine into their programs.

NEC uses your smartphone’s camera to detect knock-off goods

Article

NEC smartphone tech can spot counterfeit goods | PC World

NEC wants you to spot counterfeits using your phone’s camera | Engadget

My Comments


The camera on these smartphones could work towards identifying whether that handbag at the flea market is a knock-off

Previously, I had covered some applications where commodity-priced camera modules have been used for machine vision. These applications, mostly based around the cameras that your typical smartphone or tablet is equipped with, were about more than just reading and interpreting a barcode of some sort in order to look up data. Rather, they were about interpreting a urinalysis control stick that has been soaked in the sample liquid, or observing the blood vessels in one’s face to read one’s pulse.

But NEC is implementing machine vision using one’s smartphone to determine whether an object like a luxury handbag or a pair of name-brand sneakers is a “knock-off”. Here, they use the camera with a macro-lens attachment to identify the “fingerprint” that a metal or plastic material’s grain acquires through its manufacture. This applies to items made of these materials, or to items fitted with one or more fasteners, trim pieces or other fittings made of them.
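Conceptually, this works like matching a biometric: a macro image of the material’s grain is reduced to a compact “fingerprint” at registration time, and a later scan is compared against it. The sketch below uses a toy binary fingerprint and a Hamming-distance threshold as a stand-in for NEC’s actual (unpublished) algorithm; every name and threshold here is an illustrative assumption.

```python
import numpy as np

def surface_fingerprint(patch):
    """Reduce a grayscale macro-image patch to a binary 'fingerprint':
    1 where a pixel is brighter than the patch mean, else 0.
    (A toy stand-in for NEC's real object-fingerprinting algorithm.)"""
    patch = np.asarray(patch, dtype=float)
    return (patch > patch.mean()).astype(np.uint8).ravel()

def matches(reference, candidate, max_fraction_differing=0.15):
    """Declare a match if the fraction of differing bits (normalised
    Hamming distance) stays under the chosen threshold."""
    diff = np.count_nonzero(reference != candidate)
    return diff / reference.size <= max_fraction_differing

rng = np.random.default_rng(0)
genuine = rng.integers(0, 256, size=(16, 16))         # registered item's grain
rescan = genuine + rng.normal(0, 5, size=(16, 16))    # same item, noisy re-scan
fake = rng.integers(0, 256, size=(16, 16))            # a different item's grain

ref = surface_fingerprint(genuine)
print(matches(ref, surface_fingerprint(rescan)))  # → True
print(matches(ref, surface_fingerprint(fake)))    # → False
```

The point of the example is that small imaging noise barely disturbs the fingerprint, while a physically different grain disagrees on roughly half the bits, which is what makes the scheme usable for authentication.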

NEC wants to see this technology apply not just to verifying the authenticity of new goods, but also to letting manufacturers check that repair and maintenance work is “up to snuff”, or to following goods through the distribution and retail chain.

Manufacturers have to “register” their items in order to create the “reference database” for their goods. As well, users would need a macro-capable device, such as a smartphone fitted with a macro-converter attachment or a “clip-on” camera with this kind of lens. NEC will offer the lenses as 3D-printed attachments to suit most popular handsets and tablets. This could also open up a market for small-form webcams and similar cameras that come with macro or multi-function lenses.

An open question is whether the technology would apply to goods made of soft materials like cloth or leather. That would take it further, identifying clothes, footwear and “soft-material” luggage, or checking whether the material used to upholster furniture is what the manufacturer or customer wanted for the job.

Your smartphone’s camera can take your pulse courtesy of Fujitsu

Article

Fujitsu tech takes your pulse with your camera phone – popular science, mobile applications, mobile, Fujitsu – PC World Australia

My Comments

The smartphone or tablet platform is starting to play an increasingly important role in personal health and wellbeing without depending on extra peripherals. These devices are becoming increasingly relevant because you can keep an electronic record of observations on them, or easily send the data to a doctor or clinic via email or a cloud data service. This could lead to these devices becoming part of various home-based healthcare setups, such as management of chronic illnesses, or catering to the idea of “ageing at home”, where older people stay at home independently or under the care of relatives, friends or paid carers.

Previously I reported on the use of a smartphone camera and an app that implements machine vision to “read” certain urinalysis sticks, avoiding the need to check them against confusing colour charts. I even put forward the idea of using similar “fluid-analysis” sticks and a smartphone app to check other liquids, like drinks for “spiking” or “loading”, or the pH level in a swimming pool.

Now Fujitsu has developed software that lets a small digital camera, like the one installed in a smartphone or tablet, work as machine vision for taking someone’s pulse. This may be seen to displace the traditional technique where you press on the patient’s wrist near their hand and count the beats you feel over a minute, measured with a stopwatch or a watch with a second hand.

This concept works on the fact that the brightness of one’s face changes slightly as the heart beats: the software looks at green light, which is absorbed by the haemoglobin in freshly oxygenated red blood cells. The procedure requires 5 seconds versus a minute for the orthodox method, and the software can detect when the patient is still for improved accuracy.
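The signal-processing side of this can be illustrated quite simply: average the green channel of each video frame, then find the dominant frequency in the heart-rate band. The sketch below does exactly that on a simulated 5-second, 30 fps signal; the simulated brightness trace and the band limits are my own illustrative assumptions, not Fujitsu’s published method.

```python
import numpy as np

# Simulate 5 seconds of per-frame green-channel brightness from a face,
# sampled at 30 fps: a 72 bpm (1.2 Hz) pulse buried in small noise.
# (An illustrative stand-in for real camera frames.)
fps, seconds = 30, 5
t = np.arange(fps * seconds) / fps
noise = np.random.default_rng(1).normal(0, 0.1, t.size)
green_trace = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + noise

def estimate_bpm(green_means, fps, lo=0.7, hi=3.5):
    """Estimate pulse rate from per-frame green-channel brightness:
    pick the dominant frequency between lo and hi Hz (42-210 bpm)
    in the signal's spectrum."""
    x = green_means - np.mean(green_means)  # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60                        # Hz → beats per minute

print(round(estimate_bpm(green_trace, fps)))  # → 72
```

Restricting the search to a plausible heart-rate band is what keeps slow lighting drift and high-frequency sensor noise from being mistaken for the pulse, and it is also why stillness matters: motion puts energy right into that band.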

Fujitsu hopes to commercialise the technology within 12 months, but there are questions about whether they will implement it in their own equipment or license it to other developers. For it to become popular, they would have to license the algorithms to other software developers to integrate into their projects, and / or release a finished software product to the platform app stores for people to use on their devices.

They also see this technology facilitating unobtrusive measurement of one’s pulse using the camera on a PC, smartphone, smart TV or tablet, as part of long-term observational-healthcare situations like chronic-illness management.

What I see in this is the ability to use cost-effective and ubiquitous hardware, i.e. the multi-function smartphone, tablet or Ultrabook, as part of remote healthcare and allied applications with minimal need for extra peripherals.

Using a smartphone’s camera as machine vision for better fluid analysis using analysis strips

Article

Urine sample app lets users detect diseases with iPhones | Cutting Edge – CNET News

My Comments

I was amazed by the concept of having a smartphone’s camera, along with a special app, “read” a urine-analysis strip to provide a better analysis of the diseases and other issues that urinalysis can reveal. This allows for the in-home diagnosis that these strips provide, but with improved accuracy. The end user doesn’t have to add any peripherals to the smartphone or tablet for the strips to work with the software, nor does the device come into contact with the fluid in question. Rather, the software uses the camera integrated in the device as its “machine vision” to do the analysis.

The developer could also program this app further to analyse for other illnesses and conditions.

But I would also like to see the concept taken beyond health tests. For example, a fluid-analysis strip along with an app that “reads” it could come in handy as a tool for safe partying. Here, the strip could be dipped in a hot or cold drink, then read by an analysis app that uses the smartphone camera as machine vision to determine whether the drink has been spiked with drugs. The app could also determine whether a drink has been “loaded” with too much alcohol, by referring to a device-local or online database of known “alcohol ratios” for many drinks, including mixed drinks and cocktails.
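The “loaded drink” check I am imagining boils down to a simple range lookup once the strip has yielded a reading. A minimal sketch, with an entirely hypothetical database and illustrative alcohol-by-volume figures:

```python
# Hypothetical device-local database of expected alcohol-by-volume (ABV)
# ranges for common drinks; the figures are illustrative, not authoritative.
EXPECTED_ABV = {
    "beer":        (3.0, 6.0),
    "wine":        (9.0, 15.0),
    "gin & tonic": (7.0, 12.0),
}

def check_drink(name, measured_abv):
    """Flag a drink as 'loaded' if the strip's measured ABV falls
    above the expected range for that drink."""
    lo, hi = EXPECTED_ABV[name]
    if measured_abv > hi:
        return "loaded"
    if measured_abv < lo:
        return "weaker than expected"
    return "as expected"

print(check_drink("gin & tonic", 18.5))  # → loaded
```

An online version of such a database could cover the long tail of cocktails, with the device-local copy covering the common drinks when there is no signal at the venue.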

Similar “analysis-stick” chromatography using a webcam, smartphone camera or similar machine vision could be taken further for consumer applications, like checking the condition of engine and battery fluids in a vehicle, which can betray the truth about the condition of that used car someone is trying to sell, or checking the condition of the water in a swimming pool so you can keep that pool in order.