Abstract
This paper describes an algorithm for detecting human faces and subsequently localizing the eyes, nose, and mouth. First, we locate the face using color and shape information. To this end, a supervised pixel-based color classifier marks all pixels that lie within a prespecified distance of "skin color". This color-classification map is then smoothed using either morphological operations or filtering with a Gibbs random field model. The eigenvalues and eigenvectors of the spatial covariance matrix are used to fit an ellipse to the skin region under analysis, and the Hausdorff distance measures the proximity between the shape of the region and the ellipse model. Then, we introduce symmetry-based cost functions to locate the centers of the eyes, the tip of the nose, and the center of the mouth within the facial segmentation mask. The cost functions are designed to exploit the inherent symmetries of facial patterns. We demonstrate the performance of our algorithm on a variety of images.
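As a rough sketch of the ellipse-fitting step described above, one might compute the centroid and spatial covariance of the binary skin mask and take the eigenvectors as the ellipse axes. The choice of a 2-sigma contour (semi-axes equal to twice the square roots of the eigenvalues) is an illustrative assumption, not a detail taken from the paper:

```python
import numpy as np

def fit_ellipse(mask):
    """Fit an ellipse to a binary region via its spatial covariance matrix.

    Returns (center, axes, angle): the centroid (x, y), the semi-axis
    lengths (minor, major), and the orientation of the major axis in
    radians.
    """
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)
    # eigh returns eigenvalues in ascending order for a symmetric matrix.
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Semi-axes proportional to the standard deviation along each
    # principal direction; the factor 2 (a 2-sigma contour) is an
    # assumed convention for illustration.
    axes = 2.0 * np.sqrt(eigvals)
    major = eigvecs[:, -1]  # eigenvector of the largest eigenvalue
    angle = np.arctan2(major[1], major[0])
    return center, axes, angle
```

The region's elongation and orientation fall out of the eigen-decomposition directly, which is why this fit is cheap enough to apply to every candidate skin region before the Hausdorff-distance comparison.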