I believe this could be the first step toward authentication using facial recognition based on people’s mannerisms. The human ability to recognize a face is remarkable; there is an entire region of the brain dedicated to the process.
Could a machine, like the human brain, handle the complexity involved in facial recognition using exponential technologies such as machine learning? The simple answer is yes.
In the article Human Face Recognition Found in Neural Network Based on Monkey Brains, you see this unfolding before your eyes. A network is not being programmed to recognize faces; it is actually learning how to do so.
Interestingly, this kind of progress can get buried in the details presented in these kinds of articles:
Farzmahdi and co train the layers in the system using different image databases. For example, one of the datasets contains 740 face images consisting of 37 different views of 20 people. Another dataset contains images of 90 people taken from 37 different viewing angles. They also have a number of datasets for evaluating specific properties of the neural net.
Having trained the neural network, Farzmahdi and co put it through its paces. In particular, they test whether the network demonstrates known human behaviours when recognising faces.
You have to be kidding me: ‘neural net’ and ‘train the layers of the system’! It sounds like you are teaching a child in infancy.
Well, this is exactly what is happening.
What is a ‘neural net’ from a machine learning perspective? And what does it mean to train layers of a system? Check out this article for more information.
Inspired by our understanding of how the brain learns, neural networks use learning algorithms to recognize speech, identify objects in images, retrieve similar images and recommend products that a user will like.
Neural Networks are gradually taking over from simpler Machine Learning methods.
They are already at the heart of a new generation of speech recognition devices and they are beginning to outperform earlier systems for recognizing objects in images.
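To make “training the layers of a system” concrete, here is a minimal sketch. This is not the network from the article: it is a tiny two-layer network in NumPy learning the XOR function by plain gradient descent, and every number in it (hidden-layer width, learning rate, iteration count) is an arbitrary choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a problem a single layer cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: input -> hidden (8 units) -> output
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(20000):
    # Forward pass through both layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the mean squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update of every layer's weights ("training the layers")
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
print(preds.ravel())
```

A face recognition network works on the same principle, only with far more layers and parameters, and with images instead of four toy points.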
If you are interested in further studies of Machine Learning, Coursera offers an online course specifically about neural networks.
MIT’s Technology Review showcases a great example of how machine-learning algorithms can create three-dimensional representations of faces. These faces were reconstructed from images that capture the identifying features of an individual’s face as well as their mannerisms and expressions.
The project was started by researchers at the University of Washington, who asked the question, “What makes Tom Hanks look like Tom Hanks?”
The software maps the movements and gestures of one person’s face and transposes them onto another’s. The team hopes to combine this new technology with virtual-reality technology to build three-dimensional reconstructions of deceased persons.
I don’t necessarily believe that ‘selfie’ or ‘mannerism’ authentication underpins the future of IT security. However, multi-layer or two-factor authentication has proven a very helpful deterrent to attackers, since adding a layer on top of traditionally weak passwords is necessary.
Identifying people by their unique characteristics is not new
Records from the 221-206 BC Qin Dynasty, China, show details about how handprints were used as evidence during burglary investigations. Clay seals bearing friction ridge impressions were used during both the Qin and Han Dynasties (221 BC – 220 AD).
Fingerprint identification is also over 100 years old. Francis Galton, although not the first person to use fingerprints to identify suspects, really pushed the technology forward. In 1892, he collected a large sample of prints through his Anthropological laboratories, eventually amassing over 8,000 sets.
His study of these prints provided the foundation for meaningful comparison of different prints, and he was able to construct a statistical proof of their uniqueness.
Galton also provided the first workable fingerprint classification system. This was later adapted by E. R. Henry for practical use in police forces and other bureaucratic settings. Most of all, Galton’s extensive popular advocacy of the use of prints helped to convince a sceptical public that they could be used reliably for identification.
Multi-factor Authentication vs Multi-layer Authentication
Multi-factor authentication requires solutions from two or more categories of factors to successfully identify someone. It requires the user to provide separate pieces of evidence before access may be granted.
These categories are knowledge (something they know), possession (something they have) and inherence (something they are). Typically the evidence takes the form of a password or string of characters, secret questions, a security token or key, or a biometric such as a fingerprint, retina scan or voice print.
Multi-layer or two-factor authentication uses multiple solutions from varying categories at different points in the process. A simple example is withdrawing funds from an automatic teller machine: verification is provided by the user’s possession of a card and knowledge of their PIN. Without both, the transaction can’t take place.
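As a rough sketch of how a service might check two factors, here is a toy check in Python. The password, the shared token secret and the `authenticate` helper are all hypothetical, and the one-time code is a simplified time-based scheme in the spirit of RFC 6238, not a production implementation.

```python
import hashlib
import hmac
import struct

# Hypothetical stored credentials for one user (invented for illustration)
STORED_PASSWORD_HASH = hashlib.sha256(b"correct horse battery staple").hexdigest()
SHARED_SECRET = b"per-user-secret-provisioned-on-the-token"

def totp(secret: bytes, at: float, step: int = 30) -> str:
    """Simplified time-based one-time code (in the spirit of RFC 6238)."""
    counter = struct.pack(">Q", int(at // step))
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

def authenticate(password: str, one_time_code: str, now: float) -> bool:
    # Factor 1: knowledge -- something the user knows
    knows = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), STORED_PASSWORD_HASH)
    # Factor 2: possession -- something the user has (the token)
    has = hmac.compare_digest(totp(SHARED_SECRET, now), one_time_code)
    return knows and has  # both factors are required

now = 1_700_000_000.0  # fixed timestamp so the example is deterministic
good_code = totp(SHARED_SECRET, now)
bad_code = "000000" if good_code != "000000" else "111111"

print(authenticate("correct horse battery staple", good_code, now))  # True
print(authenticate("wrong password", good_code, now))                # False
print(authenticate("correct horse battery staple", bad_code, now))   # False
```

Note that either factor alone is useless: a stolen password fails without the token, and a stolen token fails without the password.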
We often come across multi-layer or two-factor authentication when online companies ask us to authenticate who we are when we first visit, and again when we want to enter administrative pages. On both occasions, however, the user is often required to present an identical password.
While both of these methods are legitimate and useful they have their drawbacks.
I love it when sites like Google or Microsoft’s OneDrive ask me for another form of authentication in order to verify trust. I usually trigger these events because I use a variety of devices and switch geographic regions often. It really puts these products through their paces, and I am willing to live with the inconvenience because I know that multi-factor authentication is very hard for attackers to defeat.
According to NEC, facial biometrics are the key to this kind of security. As the Technology Review piece demonstrated, the human face plays an important role in our social interaction; it communicates identity on many different levels.
In all areas, including law enforcement, border control and governmental access to sensitive materials, face recognition technology offers great potential.
As well, because it is a ‘non-contact’ process, an individual can be recognized from a reasonable distance and, at times, without their knowledge.
Other forms of identity-based recognition, such as voice printing, iris scanning, fingerprinting, EKG (heart signature), behavioral patterns and even ear patterns, require a certain level of cooperation and contact.
With face recognition technology, the identifier does not have to interact with the person being identified. This opens up opportunities as law enforcement departments scan, identify and store face images.
In fact, face recognition technology offers:
- Fast & accurate face recognition
- Multiple-matching face detection
- Combination of eye-zone extraction and facial recognition
- Short processing time, high recognition rate
- Recognition regardless of vantage point and facial changes (glasses, beard, and expression)
- Extraction of similar facial areas
- Identification and authentication based on individual facial features
- Easy adaptation to existing IT systems
- Flexible integration into many types of video monitoring systems
Machine Learning, Deep Learning and Face Recognition Technology
NEC’s face recognition technology utilizes the Generalized Matching Face Detection Method (GMFD). This revolutionary method provides high speed and high accuracy for face detection and facial-feature extraction: it generates potential eye pairs, then searches for and selects face-area candidates.
GMFD relies on a Generalized Learning Vector Quantization (GLVQ) algorithm, which is based on a neural network and is not easily fooled by attempts to conceal identity with caps, hats or sunglasses, or by smiling and blinking. Minimizing the effect of such changes preserves the overall face recognition accuracy.
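To get a feel for learning vector quantization, here is a simplified GLVQ-style sketch on toy 2-D data. The data, prototypes and learning rate are invented for illustration and have nothing to do with NEC’s implementation: each training point pulls the nearest same-class prototype toward it and pushes the nearest other-class prototype away, weighted by the GLVQ relative-distance measure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two toy classes of 2-D "feature vectors" (stand-ins for face features)
class0 = rng.normal([0, 0], 0.5, (50, 2))
class1 = rng.normal([3, 3], 0.5, (50, 2))
X = np.vstack([class0, class1])
labels = np.array([0] * 50 + [1] * 50)

# One prototype per class, initialised between the clusters
protos = np.array([[1.0, 1.0], [2.0, 2.0]])
proto_labels = np.array([0, 1])

lr = 0.05
for _ in range(30):
    for x, label in zip(X, labels):
        d = ((protos - x) ** 2).sum(axis=1)        # squared distances
        same = proto_labels == label
        j = np.argmin(np.where(same, d, np.inf))   # nearest correct prototype
        k = np.argmin(np.where(~same, d, np.inf))  # nearest wrong prototype
        d1, d2 = d[j], d[k]
        mu = (d1 - d2) / (d1 + d2)                 # GLVQ relative distance
        g = np.exp(-mu) / (1 + np.exp(-mu)) ** 2   # sigmoid derivative
        # Pull the correct prototype toward x, push the wrong one away
        protos[j] += lr * g * (d2 / (d1 + d2) ** 2) * (x - protos[j])
        protos[k] -= lr * g * (d1 / (d1 + d2) ** 2) * (x - protos[k])

# Classify every point by its nearest prototype
pred = proto_labels[np.argmin(((protos[None] - X[:, None]) ** 2).sum(-1), axis=1)]
print((pred == labels).mean())
```

Because classification depends only on relative distances to prototypes, small appearance changes (glasses, a smile) that perturb the feature vector slightly tend not to flip the nearest prototype, which is the intuition behind GLVQ’s robustness.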
As well, the Perturbation Space Method (PSM) converts two-dimensional images, such as photographs, into three dimensions, in a process called “morphing”. The three-dimensional representation of the head is then rotated both left-to-right and up-and-down.
The technology then applies different combinations of light and shadow across the face. This improves the chances that a query “face-print” will match its true mate in the database.
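The perturbation idea can be illustrated with a toy sketch. This is only an analogy to PSM, not NEC’s method, and every image and name here is invented: enrich the gallery with brightness and lighting-gradient variants of each enrolled image, then match a query by nearest neighbour against the expanded set.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two tiny 8x8 "face" images standing in for enrolled gallery face-prints
gallery = {name: rng.random((8, 8)) for name in ["alice", "bob"]}

def perturb(img: np.ndarray):
    """Generate illumination variants of one image: global brightness
    changes plus a simple left-to-right lighting gradient."""
    variants = []
    cols = np.linspace(-1, 1, img.shape[1])
    for brightness in (0.8, 1.0, 1.2):
        for tilt in (-0.2, 0.0, 0.2):
            variants.append(np.clip(img * brightness + tilt * cols, 0, 1))
    return variants

# Expand the gallery with perturbed copies of every enrolled face
expanded = [(name, v) for name, img in gallery.items() for v in perturb(img)]

# A query photo of "alice" taken under harsher, uneven lighting
query = np.clip(gallery["alice"] * 1.15 + 0.1 * np.linspace(-1, 1, 8), 0, 1)

# Nearest-neighbour match against the expanded gallery
best = min(expanded, key=lambda nv: ((nv[1] - query) ** 2).sum())
print(best[0])  # expect "alice"
```

The query’s lighting matches none of the stored originals exactly, but one of the precomputed variants is close enough for the match to succeed, which is the point of exploring a perturbation space.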
How does this technology affect users?
Android and iPhone users have already been exposed to fingerprint recognition on their devices. Swipe- and touch-based sensors are being built into phones to support these new methods, allowing immediate identification.
Identifying an individual through voice recognition is another valid method. However, if it is not paired with a geographic-location identifier or another method of authentication, the signal can be corrupted by something as simple as background noise.
While fingerprints can’t be changed, they are open to compromise if someone obtains a facsimile of your prints; passwords, by contrast, can be changed at whim. There is also an important practical point: biometric (fingerprint) authentication is hard to deploy across entire populations because some people’s fingerprints can’t be read, and some people, veterans for example, are missing fingers or arms.
It is interesting to note that apart from face recognition technology, each of these other methods has its flaws.
Iris recognition has roughly a 1 in 2 million chance of a false positive, while fingerprinting has about a 1 in 200,000 chance. That makes iris scanning roughly ten times more secure on this measure.
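Taking the stated rates at face value, the comparison is easy to check, along with what those rates imply once a system handles many authentication attempts:

```python
# Stated false-positive rates (taken from the figures quoted above)
iris_fp = 1 / 2_000_000
finger_fp = 1 / 200_000

ratio = finger_fp / iris_fp
print(ratio)  # iris scanning is ~10x less error-prone per attempt

# Chance of at least one false match across n independent attempts
def p_any_false_match(p_single: float, n: int) -> float:
    return 1 - (1 - p_single) ** n

# At population scale the gap matters: over 100,000 attempts a
# fingerprint system is far more likely to produce a false match
print(round(p_any_false_match(finger_fp, 100_000), 3))
print(round(p_any_false_match(iris_fp, 100_000), 3))
```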
Iris scanning is harder, more cumbersome and less convenient. It also has a “creepy” factor: people have not liked having an iris scanner held up to their eyes. However, the ‘selfie’ is commonplace now, so, ironically, iris scanning may become more acceptable given more time.
The technology to identify an individual based on gesture and movement is also approaching. You will be identified by the way you use your phone, search Google and type text messages, to name just a few signals.
These new steps offer both problems and opportunities for hackers.
Ethics versus Context
The issue here is less about whether facial recognition is the “second coming” of security. The human brain has been evolving over thousands and thousands of years; exponential technologies are evolving so fast that they are skipping steps in the evolutionary process at a never-before-seen rate.
However, time is needed to evaluate whether what you are creating is right for society. What are the lateral implications of the machine-learning toys you are making? Time gives us the space to evaluate the deeper societal implications around ethics and governance that are less popular to discuss.
A machine that can recognize a face more accurately than a human being is coming, and the power to do so will reside right on your own phone. In theory, if the phone knows who its owner is, this solves mobile security right at the source.
This is the good side of the story, but what about the bad? What about the ‘right to be forgotten’? What about anonymity? What about this technology in the wrong hands?
Consider now the pluses and minuses of machine learning. As this age approaches, we will need strong leaders to help usher it into existence.
Bill Murphy is an IT security and IT leadership expert.