Human face recognition is a difficult problem in computer vision. Early artificial vision experiments tended to center on toy problems in which the observed world was carefully controlled and constructed. Perhaps boxes in the shapes of regular polygons were identified, or simple objects such as scissors were used. In most cases the background of the image was carefully controlled to provide excellent contrast between the objects being analyzed and the surrounding world. Clearly, face recognition does not fall into this category of problems.

Face recognition is challenging because it is a real-world problem. The human face is a complex, natural object that tends not to have easily (automatically) identified edges and features. Because of this, it is difficult to develop a mathematical model of the face that can be used as prior knowledge when analyzing a particular image.
Applications of face recognition are widespread. Perhaps the most obvious is human-computer interaction. Computers would be easier to use if, when a user simply sat down at a terminal, the computer could identify the user by name and automatically load personal preferences. This identification could even enhance other technologies such as speech recognition: if the computer can identify the individual who is speaking, the observed voice patterns can be classified more accurately against the known individual's voice.
Human face recognition technology could also have uses in the security domain. Recognition of the face could be one of several mechanisms employed to identify an individual. As a security measure, face recognition has the advantage that it can be done quickly, perhaps even in real time, and does not require extensive equipment to implement. It also does not pose a particular inconvenience to the subject being identified, as is the case with retinal scans. It has the disadvantage, however, that it is not a foolproof method of authentication, since human face appearance is subject to various sporadic changes on a day-to-day basis (shaving, hair style, acne, etc.), as well as gradual changes over time (aging). Because of this, face recognition is perhaps best used to augment other identification techniques.

A final domain in which face recognition techniques could be useful is search engine technology. In combination with face detection systems, one could enable users to search for specific people in images, either by having the user provide an image of the person to be found, or, for well-known individuals, simply the person's name. A specific application of this technology is criminal mug shot databases. This environment is perfectly suited to automated face recognition, since all poses are standardized and lighting and scale are held constant. Clearly, this type of technology could extend online searches beyond the textual clues typically used when indexing information.
Face recognition is one of the most relevant applications of image analysis. Building an automated system that equals the human ability to recognize faces is a true challenge. Although humans are quite good at identifying known faces, we are not very skilled when we must deal with a large number of unknown faces. Computers, with almost limitless memory and computational speed, should overcome these human limitations.

Face recognition remains an unsolved problem and an in-demand technology. A simple search for the phrase "face recognition" in the IEEE Digital Library returns 9,422 results, 1,332 of them from the year 2009 alone.
There are many different industry areas interested in what it could offer. Some examples include video surveillance, human-machine interaction, photo cameras, virtual reality and law enforcement. This multidisciplinary interest pushes the research forward and attracts attention from diverse disciplines. Therefore, it is not a problem restricted to computer vision research: face recognition is a relevant subject in pattern recognition, neural networks, computer graphics, image processing and psychology. In fact, the earliest works on this subject were done in the 1950s in psychology, attached to other issues like facial expression, interpretation of emotion or perception of gestures.

Engineering started to show interest in face recognition in the 1960s.
One of the first researchers on this subject was Woodrow W. Bledsoe. In 1960, Bledsoe, along with other researchers, started Panoramic Research, Inc., in Palo Alto, California. The majority of the work done by this company involved AI-related contracts from the U.S. Department of Defense and various intelligence agencies. During 1964 and 1965, Bledsoe, along with Helen Chan and Charles Bisson, worked on using computers to recognize human faces [14, 15]. Because the funding for this research was provided by an unnamed intelligence agency, little of the work was published. He later continued his research at the Stanford Research Institute. Bledsoe designed and implemented a semi-automatic system: some face coordinates were selected by a human operator, and then computers used this information for recognition. He described most of the problems that, even 50 years later, face recognition still suffers from: variations in illumination, head rotation, facial expression and aging. Research along these lines continued, trying to measure subjective face features such as ear size or between-eye distance. For instance, this approach was used at Bell Laboratories by A. Jay Goldstein, Leon D. Harmon and Ann B. Lesk. They described a vector containing 21 subjective features, such as ear protrusion, eyebrow weight or nose length, as the basis for recognizing faces using pattern classification techniques. In 1973, Fischler and Elschlager tried to measure similar features automatically. Their algorithm used local template matching and a global measure of fit to find and measure facial features.
There were other approaches in the 1970s. Some tried to define a face as a set of geometric parameters and then perform pattern recognition based on those parameters. But the first fully automated face recognition system was developed by Kanade in 1973. He designed and implemented a face recognition program that ran on a computer system built for this purpose. The algorithm extracted sixteen facial parameters automatically. In his work, Kanade compared this automated extraction to human, manual extraction, showing only a small difference. He obtained a correct identification rate of 45-75%, and demonstrated that better results were obtained when irrelevant features were not used.

In the 1980s a diversity of approaches was actively pursued, most of them continuing previous tendencies. Some works tried to improve the methods used for measuring subjective features. For instance, Mark Nixon presented a geometric measurement for eye spacing. The template matching approach was improved with strategies such as "deformable templates". This decade also brought new approaches: some researchers built face recognition algorithms using artificial neural networks.
The first mention of eigenfaces in image processing, a technique that would become the dominant approach in the following years, was made by L. Sirovich and M. Kirby in 1986. Their method was based on Principal Component Analysis (PCA). Their goal was to represent an image in a lower dimension without losing much information, and then to reconstruct it. Their work would later become the foundation of many new face recognition algorithms.

The 1990s saw the broad recognition of the eigenface approach as the basis for the state of the art, as well as the first industrial applications. In 1992 Matthew Turk and Alex Pentland of MIT presented a work which used eigenfaces for recognition. Their algorithm was able to locate, track and classify a subject's head. Since the 1990s, the face recognition area has received a lot of attention, with a noticeable increase in the number of publications. Many approaches have been taken, which has led to different algorithms. Some of the most relevant are PCA, ICA, LDA and their derivatives. Different approaches and algorithms will be discussed later in this work.
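The eigenface idea described above (PCA on centered face images, then projection into a lower-dimensional space and reconstruction) can be sketched in a few lines of NumPy. This is a minimal illustration, not any particular published implementation; the random matrix stands in for real flattened face images, and all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((10, 32 * 32))  # 10 hypothetical 32x32 face images, flattened

# Center the data on the mean face
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The eigenfaces are the principal components of the centered image set
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:5]  # keep the 5 directions of largest variance

# Represent a face by its weights in eigenface space, then reconstruct it
weights = centered[0] @ eigenfaces.T
reconstruction = mean_face + weights @ eigenfaces
```

The key point of Sirovich and Kirby's proposal is visible here: a 1024-pixel image is summarized by just 5 weights, and the reconstruction from those weights is close to the original because most of the variance lies along the leading components.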
The most evident face features were used at the beginning of face recognition. It was a sensible approach, mimicking the human face recognition ability. There was an effort to measure the importance of certain intuitive features (mouth, eyes, cheeks) and geometric measures (between-eye distance, width-to-length ratio). Nowadays this is still a relevant issue, mostly because discarding certain facial features or parts of a face can lead to better performance. In other words, it is crucial to decide which facial features contribute to good recognition and which ones are no better than added noise.
However, the introduction of abstract mathematical tools like eigenfaces created another approach to face recognition. It became possible to compute the similarities between faces while ignoring those human-relevant features. This new point of view enabled a new level of abstraction, leaving the anthropocentric approach behind.

There are still some human-relevant features that are taken into account. For example, skin color [9, 3] is an important feature for face detection. The location of certain features like the mouth or eyes is also used to perform a normalization prior to the feature extraction step.

To sum up, a designer can apply to the algorithms the knowledge that psychology, neurology or simple observation provide. On the other hand, it is essential to perform abstractions and attack the problem from a purely mathematical or computational point of view.
Face recognition is a term that includes several sub-problems. There are different classifications of these problems in the bibliography. Some of them will be explained in this section. Finally, a general, unified classification will be proposed.
The input of a face recognition system is always an image or video stream. The output is an identification or verification of the subject or subjects that appear in the image or video. Some approaches define a face recognition system as a three-step process (see Figure 1.1). From this point of view, the face detection and feature extraction phases could run simultaneously.

Figure 1.1: A generic face recognition system.

Face detection is defined as the process of extracting faces from scenes, so that the system positively identifies a certain image region as a face. This procedure has many applications, such as face tracking, pose estimation or compression. The next step, feature extraction, involves obtaining relevant facial features from the data. These features could be certain face regions, variations, angles or measures, which can be human-relevant (e.g. eye spacing) or not. This phase has other applications, such as facial feature tracking or emotion recognition. Finally, the system recognizes the face. In an identification task, the system would report an identity from a database. This phase involves a comparison method, a classification algorithm and an accuracy measure, and it uses methods common to many other areas that also perform some classification process, such as sound engineering or data mining.

These phases can be merged, or new ones can be added. Therefore, we could find many different engineering approaches to a face recognition problem. Face detection and recognition could be performed in tandem, or the system could proceed to an expression analysis before normalizing the face.
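The three-stage pipeline just described can be outlined schematically. Everything below is illustrative scaffolding, not a real library API: each stub (whole image as the only "face", column means as the "features", nearest neighbour as the "recognizer") stands in for a real algorithm at that stage.

```python
from typing import Dict, List, Tuple
import numpy as np

def detect_faces(image: np.ndarray) -> List[Tuple[int, int, int, int]]:
    """Stage 1 stub: return bounding boxes (x, y, w, h) of face regions."""
    return [(0, 0, image.shape[1], image.shape[0])]  # whole image as one face

def extract_features(image: np.ndarray, box: Tuple[int, int, int, int]) -> np.ndarray:
    """Stage 2 stub: compute a feature vector for one detected region."""
    x, y, w, h = box
    region = image[y:y + h, x:x + w]
    return region.mean(axis=0)  # trivial stand-in feature vector

def recognize(features: np.ndarray, database: Dict[str, np.ndarray]) -> str:
    """Stage 3 stub: report the closest identity from the database."""
    return min(database, key=lambda name: np.linalg.norm(features - database[name]))

# Running the stages in sequence on a synthetic image
image = np.zeros((4, 4))
db = {"alice": np.zeros(4), "bob": np.ones(4)}
boxes = detect_faces(image)
feats = extract_features(image, boxes[0])
identity = recognize(feats, db)  # "alice" for the all-zero image
```

The point of the skeleton is the data flow: each stage consumes the previous stage's output, which is also why detection and feature extraction can be merged or run simultaneously in real systems.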
Face detection is a concept that includes many sub-problems. Some systems detect and locate faces at the same time; others first perform a detection routine and then, if positive, try to locate the face. Some tracking algorithms may then be needed (see Figure 1.2).

Figure 1.2: Face detection processes.

Face detection algorithms usually share common steps. Firstly, some data dimension reduction is done in order to achieve an admissible response time. Some preprocessing may also be done to adapt the input image to the algorithm's prerequisites. Then, some algorithms analyze the image as it is, while others try to extract certain relevant facial regions. The next phase usually involves extracting facial features or measurements. These are then weighted, evaluated or compared to decide whether there is a face and where it is. Finally, some algorithms have a learning routine and incorporate new data into their models.

Face detection is, therefore, a two-class problem in which we have to decide whether or not there is a face in a picture. This approach can be seen as a simplified face recognition problem: face recognition has to classify a given face, and there are as many classes as candidates. Consequently, many face detection methods are very similar to face recognition algorithms; put another way, techniques used in face detection are often also used in face recognition.
There are many feature extraction algorithms; they will be discussed later in this paper. Most of them are also used in areas other than face recognition, and researchers in face recognition have used, modified and adapted many algorithms and methods for their purpose. For example, PCA was invented by Karl Pearson in 1901, but was proposed for pattern recognition 64 years later. Finally, it was applied to face representation and recognition in the early 1990s. See Table 1.2 for a list of some feature extraction algorithms used in face recognition.
The aim of feature selection algorithms is to select the subset of the extracted features that causes the smallest classification error. The importance of this error is what makes feature selection dependent on the classification method used. The most straightforward approach to this problem would be to examine every possible subset and choose the one that best fulfills the criterion function. However, this can become an unaffordable task in terms of computational time. Some effective approaches to this problem are based on techniques such as branch and bound algorithms. See Table 1.3 for a list of proposed selection methods.
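The exhaustive search just described can be sketched directly. The criterion function below is a toy between-class-mean distance chosen purely for illustration; a real system would score subsets by the actual classification error of its chosen classifier, and would use something like branch and bound rather than enumerating every subset.

```python
from itertools import combinations
import numpy as np

def criterion(X: np.ndarray, y: np.ndarray, subset) -> float:
    """Toy criterion: distance between class means over the chosen features."""
    Xs = X[:, list(subset)]
    return float(np.linalg.norm(Xs[y == 0].mean(axis=0) - Xs[y == 1].mean(axis=0)))

# Synthetic data: feature 1 separates the classes, features 0 and 2 are noise
X = np.array([[0.0, 5.0, 1.0],
              [0.1, 5.1, 0.9],
              [0.0, 9.0, 1.1],
              [0.1, 9.2, 1.0]])
y = np.array([0, 0, 1, 1])

# Exhaustive search: score every non-empty subset, keep the best
all_subsets = (s for k in range(1, X.shape[1] + 1)
               for s in combinations(range(X.shape[1]), k))
best = max(all_subsets, key=lambda s: criterion(X, y, s))
```

Even in this tiny example the search visits 2^3 - 1 = 7 subsets; with the dozens or hundreds of features typical in face recognition, the exponential growth is exactly why exhaustive search becomes unaffordable.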
Once the features are extracted and selected, the next step is to classify the image. Appearance-based face recognition algorithms use a wide variety of classification methods; sometimes two or more classifiers are combined to achieve better results. Most model-based algorithms, on the other hand, match the samples against a model or template, and a learning method can then be used to improve the algorithm. One way or another, classifiers have a big impact on face recognition. Classification methods are used in many areas, such as data mining, finance, signal decoding, voice recognition, natural language processing and medicine, so there is an extensive bibliography on the subject. Here, classifiers will be addressed from a general pattern recognition point of view.

Classification algorithms usually involve some learning: supervised, unsupervised or semi-supervised. Unsupervised learning is the most difficult approach, as there are no tagged examples. However, many face recognition applications include a tagged set of subjects; consequently, most face recognition systems implement supervised learning methods. There are also cases where the labeled data set is small, and sometimes the acquisition of new tagged samples is infeasible. In those cases, semi-supervised learning is required.
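A minimal sketch of the supervised setting described above: a tagged gallery of feature vectors (several samples per subject) and a k-nearest-neighbour classifier, one of the simplest classifiers used in pattern recognition. The feature vectors are synthetic placeholders, not real face features.

```python
from collections import Counter
import numpy as np

# Tagged gallery: several feature vectors per known subject
samples = np.array([[0.0, 0.1],
                    [0.1, 0.0],
                    [1.0, 0.9],
                    [0.9, 1.0],
                    [1.0, 1.1]])
labels = ["subject_a", "subject_a", "subject_b", "subject_b", "subject_b"]

def knn(probe: np.ndarray, k: int = 3) -> str:
    """Classify a probe vector by majority vote among its k nearest samples."""
    dists = np.linalg.norm(samples - probe, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

identity = knn(np.array([0.95, 0.95]))  # near the subject_b cluster
```

In a face recognition system the "probe" would be the feature vector extracted from an unknown face, and the vote over several gallery samples per subject is what makes the classifier tolerant of within-subject variation.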
This work has presented the face recognition area, explaining the different approaches, methods, tools and algorithms used since the 1960s. Some algorithms are better, some are less accurate, some are more versatile, and others are too computationally costly. Despite this variety, face recognition faces issues inherent to the problem definition, environmental conditions and hardware constraints. Some specific face detection problems were explained in the previous chapter, and some of these issues are common to other face recognition related subjects. Those and some more will be detailed in this section.

Many algorithms rely on color information to recognize faces. Features are extracted from color images, although some of them may be gray-scale. The color that we perceive from a given surface depends not only on the surface's nature, but also on the light falling upon it. In fact, color derives from our light receptors' perception of the spectrum of light, the distribution of light energy versus wavelength. There can be relevant illumination variations in images taken in uncontrolled environments. Chromaticity is therefore an essential factor in face recognition, and the intensity of the color in a pixel can vary greatly depending on the lighting conditions.

It is not only the value of individual pixels that varies with light changes; the relations or variations between pixels may also vary. As many feature extraction methods rely on color/intensity variability measures between pixels to obtain relevant data, they show an important dependency on lighting changes. Keep in mind that not only can light sources vary, but light intensities may increase or decrease and new light sources may be added. Entire face regions can be obscured or in shadow, and feature extraction can even become impossible because of solarization.
The big problem is that two images of the same subject taken under different illumination may show more differences between them than images of two different subjects.
Summing up, illumination is one of the big challenges of automated face recognition systems, and there is accordingly much literature on the subject. Interestingly, it has been demonstrated that humans can generalize representations of a face under radically different illumination conditions, although human recognition of faces is sensitive to illumination direction.
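One simple and widely used preprocessing step to reduce the lighting sensitivity discussed above is histogram equalization, which spreads an image's intensities over the full dynamic range. The NumPy-only sketch below assumes an 8-bit grayscale image; the synthetic "dark" image stands in for an underexposed face.

```python
import numpy as np

def equalize(image: np.ndarray) -> np.ndarray:
    """Remap pixel intensities through the image's cumulative histogram."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)         # lookup table
    return lut[image]

# A synthetic underexposed image: intensities crowded into the 0-31 range
dark = (np.arange(64, dtype=np.uint8).reshape(8, 8)) // 2
flat = equalize(dark)  # intensities now span the full 0-255 range
```

Equalization (or related normalizations such as gamma correction) is typically applied before feature extraction, so that the variability the features capture comes from the face rather than from the lighting. It compensates global brightness changes, though not directional shadows.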