Inside Robocog Labs
At HiringSolved, we’re exploring how our existing technology can be adapted into a real-world Augmented Reality product. We’re in a unique position to bridge our capabilities in people-data aggregation with augmented reality.
We’re pushing through a brand new frontier in talent acquisition products. Our goal is to develop a device that seamlessly integrates with the natural networking experience. With AR, we can amplify any user’s sense of “who’s who”.
We’re jumping into uncharted territory to build something that has never existed before. Our system’s capabilities put us in a privileged position, with major leverage to become a serious player in facial recognition ID.
Using our massive store of social, resume, and skill data, we have the power to identify people using facial recognition and overlay information about them in real time.
That means recruiters will know literally at a glance what a person’s skills are so they can determine professional fit on the fly.
Imagine being at a crowded conference and watching everyone’s professional details populate before your eyes in mid-air.
Bad with names? In this world, the anxiety surrounding forgetfulness is a non-issue. You’ll always know who you’re talking to.
Building facial recognition software for a real-time information overlay is not without its challenges. We’re still very early in our ideation phase, but our engineers have already raised some concerns about the project.
One of them, Matt, is the skeptic on the team. After some initial research, Matt feels that to build a reliable system we would need assets from the database that we just don’t have at the moment.
For example, systems that accurately identify people by facial features often rely on measurements like the distances from a person’s ears to the edges of their mouth and eyes, along with the angles between those points. These measurements act as a kind of fingerprint: they vary enough from person to person to allow highly accurate detection even when matching against a large pool of people.
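To make the idea concrete, here’s a toy sketch of that kind of geometric fingerprinting. The landmark names and coordinates are entirely hypothetical; a real system would extract them with a facial-landmark detector, and production systems use far richer feature sets than this.

```python
import math

# Hypothetical facial landmark coordinates (x, y) in pixels.
# In practice these would come from a landmark detector;
# the points and values here are illustrative only.
landmarks = {
    "left_eye":    (30.0, 40.0),
    "right_eye":   (70.0, 40.0),
    "left_ear":    (10.0, 50.0),
    "right_ear":   (90.0, 50.0),
    "mouth_left":  (38.0, 80.0),
    "mouth_right": (62.0, 80.0),
}

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def feature_vector(lm):
    """Build a scale-invariant 'fingerprint' of distance ratios and an angle."""
    # Normalize by inter-ocular distance so the vector doesn't
    # change when the same face appears larger or smaller.
    scale = dist(lm["left_eye"], lm["right_eye"])
    ratios = [
        dist(lm["left_ear"], lm["mouth_left"]) / scale,
        dist(lm["right_ear"], lm["mouth_right"]) / scale,
        dist(lm["left_eye"], lm["mouth_left"]) / scale,
        dist(lm["right_eye"], lm["mouth_right"]) / scale,
    ]
    # Angle (radians) of the eye-to-mouth-corner segment.
    angle = math.atan2(lm["mouth_left"][1] - lm["left_eye"][1],
                       lm["mouth_left"][0] - lm["left_eye"][0])
    return ratios + [angle]

vec = feature_vector(landmarks)
```

Because every distance is divided by the inter-ocular distance, the same face photographed at twice the resolution yields the same vector, which is what lets the fingerprint survive variation in camera distance.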
These systems usually rely on recording 180 degrees of a person’s face, so a headshot won’t be enough for a reliable system. A user would often have to voluntarily scan their face in a 180-degree capture that our system could use for identification. That’s a major adoption hurdle, and it’s unlikely to be resolved without a significant incentive for people to participate.
Some technology companies have found workarounds that avoid requiring a user’s input. Facebook, for example, created a working model that uses multiple photos from a user’s profile to simulate the effect of a 180-degree picture.
Still, this solution wouldn’t be viable for us, since we rarely have more than a headshot.
Another one of our engineers, Tyler, is more optimistic about what we’re capable of with our available resources. His idea involves something similar to Google’s reverse image search.
Just for kicks, Matt the skeptic ran a crude test of Google’s existing technology and found that a shot of his Caucasian male face returned a young Korean woman in the search results.
Unfazed by the demonstration, Tyler insists we could build something that works by capturing a live image and desaturating it, essentially making a black-and-white imprint. Using features of the altered image, we would compare it against other images by pixel similarity and return the closest match. That would make it possible to build something that works with what’s already available.
It would use a different algorithm than Google’s, one focused on known facial regularities. It would function like Google’s reverse image search, except hyper-focused on what makes our faces so different from one another. This would eliminate the need for a scan and be a major breakthrough, letting our system leverage the power of our database.
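A minimal sketch of Tyler’s desaturate-and-compare idea might look like the following. Everything here is hypothetical: images are flat lists of RGB tuples, assumed already aligned and resized to the same dimensions, and the BT.601 luma weights stand in for “desaturating” a capture.

```python
def desaturate(pixels):
    """Grayscale each RGB pixel using the ITU-R BT.601 luma weights."""
    return [0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in pixels]

def difference(a, b):
    """Mean absolute difference between two grayscale imprints
    (lower means more similar). Assumes equal dimensions."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def best_match(live_capture, database):
    """Return the database key whose imprint is closest to the live capture."""
    live = desaturate(live_capture)
    return min(database, key=lambda name: difference(live, desaturate(database[name])))

# Hypothetical 2x2-pixel 'headshots' keyed by candidate name.
database = {
    "alice": [(200, 180, 170), (40, 30, 30), (90, 80, 75), (210, 200, 190)],
    "bob":   [(20, 25, 30), (220, 210, 200), (60, 70, 80), (10, 15, 20)],
}
# A live capture that is a slightly brighter version of alice's photo.
capture = [(205, 185, 175), (45, 35, 35), (95, 85, 80), (215, 205, 195)]
```

Raw pixel comparison like this is brittle under changes in lighting, pose, and expression, which is exactly the weakness Matt’s crude Google test illustrates; it’s a starting point, not a finished matcher.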
Since we’re still in the ideation phase of this project, we’re not in a rush to acquire the most expensive technology on the market. We think we can keep things relatively inexpensive by leveraging low-cost products like Google Cardboard.
Even if we got our hands on exciting dev kits like the HTC Vive, Oculus Rift, or Microsoft’s HoloLens, we would still have a lot of the same initial problems to solve, and those can be tackled cheaply before we need any new toys. Since we all carry a very capable device in our pockets, there’s no urgency to invest in new hardware just yet.
That’s not to say we’re not watching those offerings very closely. We’re geeks who love to nerd out on what the new tech makes possible, and it’s taken real restraint not to expense a headset for everyone on the team.
Procuring a device like the Microsoft HoloLens might still be a great investment for us at this point: its focus is primarily on practical AR experiences, which is closer to our realm of ideation.
We’re also imagining the style of our device. For people to eventually adopt it, we need to keep image-conscious consumers in mind. That’s why a high-powered, top-of-the-line device like the HoloLens may not be the best choice, simply based on how it looks with its large protruding brow.
Even though Shon (CEO) likes to talk about a device that operates like Iron Man’s helmet, we all know it’s impractical for a professional to be seen walking around a conference with a giant helmet on their head. Or even a Google Cardboard for that matter.
Addressing style is still quite a ways away for us – but we’re making a point to think about it early on.
For now, we’re working to ensure our ongoing aggregation runs smoothly and scales well. It’s a monumental task, but it’s what makes the whole facial recognition device worthwhile. Our eureka moment will come when we link the data in this virtual world to the real one. Upcoming features in our software, like detecting willingness to leave a position and assessing culture fit, are all pieces we expect to use in real time.
Other AR Devices on the Market
Eventually, our device may provide a heads-up display resembling the one Tesla uses with Google Glass to help its auto workers get inventory information faster.
Another commercial product out there that has shown promise in AR is the Skully motorcycle helmet. It provides a display for motorcyclists that includes turn-by-turn GPS, MPH, and a 180-degree blind-spot camera for eyes in the back of your head.
Uber recently implemented facial recognition technology to deal with fraudulent driver issues it was facing. It partnered with a company called Face++, which relies on a user electing to provide their facial fingerprint.
Until the unlikely day when 180-degree scans of people’s faces are the norm and publicly available, we’re working on interim solutions. It could be a matter of aggregating more photos to simulate a 180-degree scan, as Facebook has done, or perhaps we’ll perfect an algorithm that can determine identity from headshots alone. Our exploration has just begun.
Want to hear more? Learn about AR in recruiting on our latest podcast!