On Constructing Facial Similarity Maps

Abstract
Automatically determining facial similarity is a difficult and open problem in computer vision. It is complicated both because it is unclear what facial features humans use to judge similarity and because facial similarity is subjective in nature: similarity judgements vary from person to person. In this work we propose a system that places facial similarity on a solid computational footing. First, we describe methods for efficiently acquiring facial similarity ratings from humans. Next, we show how to create feature-vector representations of each face by extracting patches around facial key-points. Finally, we show how to use the acquired similarity ratings to learn functional mappings that project facial-feature vectors into face spaces corresponding to human notions of facial similarity. We use several collections of images to both create and validate the face spaces, including perceptual similarity data obtained from humans, faces morphed between two different individuals, and the CMU PIE collection, which contains images of the same individuals under different lighting conditions. We demonstrate that our methods can effectively create face spaces that correspond to human notions of facial similarity.
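To make the pipeline in the abstract concrete, the following is a minimal sketch of the two technical steps: building a face feature vector from patches extracted around key-points, and learning a linear mapping into a face space in which distances track pairwise similarity ratings. The patch size, the contrastive-style loss, and all function names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def face_feature_vector(image, keypoints, patch_size=8):
    """Concatenate contrast-normalized pixel patches centered on facial key-points.
    (Hypothetical helper: patch size and normalization are assumptions.)"""
    half = patch_size // 2
    patches = []
    for (x, y) in keypoints:
        patch = image[y - half:y + half, x - half:x + half].astype(float)
        patch = (patch - patch.mean()) / (patch.std() + 1e-8)  # local contrast normalization
        patches.append(patch.ravel())
    return np.concatenate(patches)

def learn_projection(pairs, labels, dim_out=20, lr=1e-3, epochs=100):
    """Learn a linear map W so that ||W a - W b|| is small for pairs rated similar
    (label 1) and large for pairs rated dissimilar (label 0).
    Uses a simple contrastive-style hinge loss as a stand-in for the paper's mapping."""
    dim_in = pairs[0][0].shape[0]
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(dim_out, dim_in))
    margin = 1.0
    for _ in range(epochs):
        for (a, b), similar in zip(pairs, labels):
            diff = W @ (a - b)
            dist = np.linalg.norm(diff) + 1e-8
            if similar:
                grad = np.outer(diff / dist, a - b)    # pull similar pairs together
            elif dist < margin:
                grad = -np.outer(diff / dist, a - b)   # push dissimilar pairs apart
            else:
                continue
            W -= lr * grad
    return W
```

Given feature vectors for two faces, a learned W would be applied as `np.linalg.norm(W @ a - W @ b)` to obtain a similarity distance in the face space; this is only one plausible instantiation of the projection described above.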