Abstract
We show that a complex facial animation model can be fitted effectively and automatically to uncalibrated image sequences. Our approach is based on model-driven bundle adjustment followed by least-squares fitting. It exploits three complementary sources of information: stereo data, silhouette edges, and 2D feature points. In this way, complete head models can be acquired with a cheap and entirely passive sensor such as an ordinary video camera. These models can then be fed to existing animation software to produce synthetic sequences.