Robust facial expression recognition using a state-based model of spatially-localised facial dynamics

Abstract
This paper proposes a new approach to the robust recognition of facial expressions from video sequences. The goal of the work presented is to develop recognition techniques that overcome some limitations of current methods, such as their sensitivity to partial occlusion of the face and to noisy data. The paper investigates a representation of facial expressions based on a spatially-localised geometric facial model coupled to a state-based model of facial motion. Experiments show that the proposed facial expression recognition framework suffers relatively little degradation in recognition rate when faces are partially occluded or when various levels of noise are introduced at the feature-tracker level.
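To make the idea of a state-based model of spatially-localised facial dynamics concrete, the following is a minimal illustrative sketch, not the authors' implementation: each facial region's tracked motion is quantised into a small set of discrete states, and an expression is recognised from the combined region states, with occluded regions simply ignored. The region names, thresholds, and expression "prototypes" below are illustrative assumptions.

```python
from __future__ import annotations

def motion_state(dy: float, threshold: float = 2.0) -> str:
    """Quantise a region's vertical displacement (pixels) into a discrete state."""
    if dy > threshold:
        return "up"
    if dy < -threshold:
        return "down"
    return "neutral"

# Hypothetical expression prototypes expressed over per-region motion states.
PROTOTYPES = {
    "surprise": {"left_brow": "up", "right_brow": "up",
                 "mouth_left": "down", "mouth_right": "down"},
    "smile":    {"left_brow": "neutral", "right_brow": "neutral",
                 "mouth_left": "up", "mouth_right": "up"},
}

def recognise(displacements: dict[str, float | None]) -> str:
    """Match observed region states to prototypes; regions whose tracker
    output is missing (None, e.g. due to occlusion) are skipped, so partial
    occlusion degrades recognition gracefully rather than breaking it."""
    states = {r: motion_state(d) for r, d in displacements.items() if d is not None}

    def score(proto: dict[str, str]) -> int:
        return sum(states[r] == s for r, s in proto.items() if r in states)

    return max(PROTOTYPES, key=lambda name: score(PROTOTYPES[name]))

# Example: mouth corners raised, right brow occluded (tracker lost it).
print(recognise({"left_brow": 0.5, "right_brow": None,
                 "mouth_left": 4.1, "mouth_right": 3.8}))  # -> "smile"
```

In this sketch, robustness to occlusion and tracker noise comes from the per-region decomposition: a corrupted or missing region only removes its own vote rather than distorting a single global descriptor.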
