Using head movement to recognize activity

Abstract
This paper presents a methodology for automatically identifying human actions in either the frontal or the lateral view. By tracking the movement of the subject's head over successive frames of a monocular grayscale image sequence, we recognize 12 different actions. The head is segmented automatically in each frame, and feature vectors are extracted. Input sequences captured from a fixed CCD camera are matched against stored models of actions. The system uses a nearest-neighbor classifier to identify the test action.
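
The sketch below illustrates, under stated assumptions, the kind of pipeline the abstract describes: a per-frame head position trajectory is converted into a fixed-length feature vector and matched against stored action models with a nearest-neighbor rule. The function names (`head_trajectory_features`, `classify_action`), the trajectory encoding, and the Euclidean distance metric are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def head_trajectory_features(centroids, n_samples=20):
    """Convert a sequence of (x, y) head centroids into a fixed-length
    feature vector by normalising and resampling the trajectory.
    (Assumed encoding; the paper does not specify this exact scheme.)"""
    pts = np.asarray(centroids, dtype=float)   # shape (T, 2)
    pts = pts - pts[0]                         # translate so the track starts at the origin
    scale = np.abs(pts).max()
    if scale > 0:
        pts = pts / scale                      # normalise for subject distance / image scale
    # Resample to a fixed number of points so sequences of different length are comparable.
    idx = np.linspace(0, len(pts) - 1, n_samples)
    resampled = np.stack(
        [np.interp(idx, np.arange(len(pts)), pts[:, d]) for d in (0, 1)], axis=1
    )
    return resampled.ravel()                   # length 2 * n_samples

def classify_action(test_centroids, model_features, model_labels):
    """Nearest-neighbor classification of a test sequence against stored action models."""
    f = head_trajectory_features(test_centroids)
    dists = np.linalg.norm(model_features - f, axis=1)  # distance to every stored model
    return model_labels[int(np.argmin(dists))]

# Usage sketch: models is a list of (label, centroid_sequence) pairs from training data.
# model_features = np.stack([head_trajectory_features(c) for _, c in models])
# model_labels = [lbl for lbl, _ in models]
# predicted = classify_action(test_sequence_centroids, model_features, model_labels)
```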
