Integration of motion control techniques for virtual human and avatar real-time animation

Abstract
Real-time animation of virtual humans requires a dedicated architecture for integrating different motion control techniques, encapsulated into so-called actions. In this paper we describe a software architecture, called AGENTlib, for managing the real-time combination of motions produced by different movement generators. Two major requirements must be enforced from the end-user viewpoint: first, that multiple motion controllers can simultaneously control some or all parts of the virtual human; second, that successive actions result in a smooth motion flow. At the lowest level, motion is controlled with techniques such as Inverse Kinematics, Dynamics, sensor-based control, keyframe interpolation, and Motion Capture. More complex actions can combine several motion control paradigms organized as a functional model (e.g. grasping, walking). Defining the boundary between an action and a behavior is difficult, if not impossible; we therefore characterize an action as follows: the goal of an action is clearly identified from the local knowledge available at its activation. Whenever this requirement is met, an action is allowed to coordinate the sequential scheduling of one or several simpler actions with a finite state automaton. Our motivation is to relieve the application creator both of managing this low-level finite state automaton and of monitoring parameters against the goal to reach; the action provider's job is to craft these automata with precise timing and continuity adjustments.
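The sequential scheduling of simpler actions by a finite state automaton, as described above, can be illustrated with a minimal Python sketch. All class and method names here are hypothetical, chosen for illustration; they are not the actual AGENTlib API, and real actions would drive joint values rather than simple timers.

```python
# Hypothetical sketch: a composite action sequencing sub-actions with a
# finite state automaton (one state per sub-action). Illustrative only;
# not the actual AGENTlib interface.

class Action:
    """A motion generator controlling part of the virtual human for a
    fixed duration (stand-in for keyframe playback, IK, etc.)."""
    def __init__(self, name, duration):
        self.name = name
        self.duration = duration
        self.elapsed = 0.0

    def update(self, dt):
        self.elapsed += dt

    def done(self):
        return self.elapsed >= self.duration


class CompositeAction(Action):
    """Coordinates the sequential scheduling of simpler actions with a
    finite state automaton: the state is the index of the active
    sub-action, advancing when that sub-action reports completion."""
    def __init__(self, name, sub_actions):
        super().__init__(name, sum(a.duration for a in sub_actions))
        self.sub_actions = sub_actions
        self.state = 0  # index of the currently active sub-action

    def update(self, dt):
        super().update(dt)
        if self.state < len(self.sub_actions):
            current = self.sub_actions[self.state]
            current.update(dt)
            if current.done():
                self.state += 1  # FSM transition to the next sub-action

    def done(self):
        return self.state >= len(self.sub_actions)


# Example: a "grasp" action built from two simpler actions.
grasp = CompositeAction("grasp", [Action("reach", 0.5),
                                  Action("close_hand", 0.2)])
while not grasp.done():
    grasp.update(0.1)  # one animation frame of 100 ms
print(grasp.state)  # → 2 (both sub-actions completed)
```

The point of the sketch is the division of labor the abstract argues for: the application creator only activates `grasp`, while the action provider is responsible for crafting the internal automaton and its timing.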
