Articulated model based people tracking using motion models
- 26 June 2003
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
This paper focuses on the acquisition of human motion data, such as joint angles and velocities, for virtual reality applications, using both an articulated body model and a motion model in the CONDENSATION framework. First, we learn a motion model represented by Gaussian distributions and explore motion constraints by considering the dependencies among motion parameters, representing them as conditional distributions. Both are integrated into the dynamic model to concentrate factored sampling in the areas of state space carrying the most posterior information. To measure the observation density accurately and robustly, a pose evaluation function (PEF) modeled with a radial term is proposed. We also address the automatic acquisition of the initial model posture and recovery from severe tracking failures. A large number of experiments on several subjects demonstrate that our approach works well.
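The CONDENSATION-style propagation the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the 1-D state, the motion-model standard deviation, and the Gaussian stand-in for the pose evaluation function are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(particles, weights, motion_std, observe):
    """One CONDENSATION iteration: factored sampling + Gaussian dynamics.

    particles  : (N, D) array of state hypotheses (e.g. joint angles)
    weights    : (N,) normalized weights from the previous frame
    motion_std : std of the learned Gaussian motion model (assumed here)
    observe    : callable giving an unnormalized observation likelihood
                 (a stand-in for the paper's pose evaluation function)
    """
    n = len(particles)
    # 1. Factored sampling: pick states in proportion to their weights.
    idx = rng.choice(n, size=n, p=weights)
    resampled = particles[idx]
    # 2. Predict: diffuse each sample with the Gaussian motion model.
    predicted = resampled + rng.normal(0.0, motion_std, size=resampled.shape)
    # 3. Measure: reweight each hypothesis by the observation density.
    new_weights = np.array([observe(s) for s in predicted])
    new_weights /= new_weights.sum()
    return predicted, new_weights

# Toy 1-D run: track a state near 1.0 with a Gaussian "PEF".
particles = rng.normal(0.0, 1.0, size=(200, 1))
weights = np.full(200, 1.0 / 200)
pef = lambda s: np.exp(-0.5 * ((s[0] - 1.0) / 0.2) ** 2)
for _ in range(10):
    particles, weights = condensation_step(particles, weights, 0.1, pef)
estimate = float((particles[:, 0] * weights).sum())
```

Concentrating samples via the learned motion model, as the paper proposes, plays the role of `motion_std` here: a tighter, better-informed dynamic model keeps the particle cloud in high-posterior regions of the state space.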