Abstract
The use of facial nonverbal behavior as an input has created a need for a robust emotion state model. Such a model would be used by the machine to determine actions appropriate to the exhibited behavior and the machine's state. Existing cognitive models of emotion provide a ready starting point for developing a model for the human interface. However, these models do not account for the unique phenomenological aspects of the interface. These aspects are examined in detail from both historical and modern perspectives. The effects of the level of immersion in the virtual environment are also explored. Finally, solutions to this phenomenological problem and paths for further research are proposed.