The party season is upon us, and some of us may use various electronic communication technologies to do something we would otherwise regret after we have had a few too many at the party. This could be using social media to share an embarrassing picture of someone while they were drunk, sending a stupid email to someone we know or knew, or making that call to “the wrong person”.
Some software developers have worked on technologies to “put the brakes on” this kind of rash activity, such as Google’s effort to implement a problem-solving challenge before you can send an email late at night, or iOS apps that mask the contacts you are at risk of contacting when drunk. But Facebook have taken this further by implementing deep machine learning in their “slow-down” algorithms.
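Google’s Gmail feature along these lines reportedly made you solve a few simple arithmetic problems within a time limit before a late-night email would go out. A minimal sketch of that idea in Python — the function names, the 10pm–4am window, and the problem counts are my own assumptions for illustration, not Google’s implementation:

```python
import random
import time

def late_night(hour):
    """True during the assumed 'risky' window (here, 10pm to 4am)."""
    return hour >= 22 or hour < 4

def sobriety_gate(num_problems=3, time_limit=60, answer_fn=input):
    """Require solving simple arithmetic within a time limit before a
    message is allowed to send. answer_fn is injectable so the gate can
    be driven by a UI prompt or by a test harness."""
    start = time.monotonic()
    for _ in range(num_problems):
        a, b = random.randint(10, 99), random.randint(10, 99)
        reply = answer_fn(f"What is {a} + {b}? ")
        if time.monotonic() - start > time_limit:
            return False  # too slow: block the send
        try:
            if int(reply) != a + b:
                return False  # wrong answer: block the send
        except ValueError:
            return False  # non-numeric reply: block the send
    return True  # all problems solved in time: allow the send
```

The point of the design is friction, not security: a sober user clears the gate in seconds, while an impaired one is slowed down long enough to reconsider.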
Here, they use the facial-recognition algorithms built for their pioneering image-tagging feature, together with a mobile device’s camera, to identify whether the user looks drunk. This is combined with other machine learning that assesses the context of the posts and links you intend to share, especially where you are tagging another person, so you aren’t at risk of sharing something you wouldn’t otherwise share. It would work through Facebook client software that has access to the webcam on your computer or the integrated front camera on your mobile device, but may not work in web-based Facebook sessions where the browser doesn’t grant the page programmatic access to the webcam.
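Facebook haven’t published how the slow-down decision is actually made, so the following is purely a hypothetical sketch: it assumes the client reduces the camera analysis to a single “looks drunk” score and the post-context models to a single “risky post” score, then trips the brake if either score, or a blend of the two, crosses a threshold. All names and thresholds here are invented for illustration:

```python
def should_slow_down(face_drunk_score, post_risk_score,
                     face_threshold=0.7, risk_threshold=0.6):
    """Hypothetical 'slow-down' decision. face_drunk_score is an assumed
    0-1 output of a facial-analysis model (how intoxicated the user
    appears on camera); post_risk_score is an assumed 0-1 output of a
    content model (how risky the post looks, e.g. tagging another person
    late at night). Either strong signal alone trips the brake, and two
    moderate signals can combine to trip it."""
    if face_drunk_score >= face_threshold:
        return True  # the user looks clearly intoxicated
    if post_risk_score >= risk_threshold:
        return True  # the post itself looks clearly risky
    # Two borderline signals together still warrant a pause.
    return (face_drunk_score + post_risk_score) / 2 >= 0.55
```

In a real system these scores would come from trained models fed by the camera and the draft post; here they are just numbers between 0 and 1, which is enough to show why combining the two signals catches cases that neither catches alone.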
This deep learning could also be used as part of client-side software to help you avoid drunk emailing or other risk-taking activities you could engage in at the computer. As I have seen before, a lot of advanced machine-learning research doesn’t stay with a particular company to exploit in its own products, but can be licensed out to other software developers to build into their programs.