Abstract
We explore the application of facial tracking to automated re-animation. To this end, it is necessary to recover both head-pose and facial expression from the facial movement of a performer. However, the two effects are coupled, a serious problem that previous studies have not fully addressed. The solution to this interaction problem proposed here is to solve explicitly, at each timestep, for both pose and expression variables. In principle this is a nonlinear inverse problem. However, appropriate parameterisation of pose, in terms of affine transformations with parallax, and of expression, in terms of key-frames, reduces the problem to a bilinear one. This can then be solved directly by Singular Value Decomposition. Thus actor-driven animation has been implemented in real-time, at video field-rate, using two Indy desktop workstations.
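The bilinear reduction described above can be illustrated with a minimal NumPy sketch: when pose and expression couple multiplicatively, a stack of observations forms a rank-one matrix whose SVD separates the two factors (up to a shared scale and sign). This is an assumed toy formulation for illustration, not the authors' implementation; the variable names and dimensions are hypothetical.

```python
import numpy as np

# Toy bilinear model (hypothetical dimensions): each observation is the
# product of a pose parameter and an expression weight, so the full
# observation matrix M is the outer product of the two parameter vectors.
rng = np.random.default_rng(0)
p_true = rng.standard_normal(4)   # pose parameters (e.g. affine + parallax)
e_true = rng.standard_normal(6)   # expression key-frame weights

M = np.outer(p_true, e_true)      # coupled pose/expression observations

# SVD of a rank-one matrix recovers each factor from the leading
# singular vectors, resolving the coupling directly.
U, s, Vt = np.linalg.svd(M)
p_est = np.sqrt(s[0]) * U[:, 0]   # recovered pose (up to sign/scale)
e_est = np.sqrt(s[0]) * Vt[0, :]  # recovered expression (up to sign/scale)

# The rank-one reconstruction reproduces the observations exactly.
assert np.allclose(np.outer(p_est, e_est), M)
```

In practice the observation matrix is only approximately rank-one due to noise, and the leading singular pair gives the least-squares-optimal bilinear fit, which is what makes the SVD solution direct rather than iterative.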
