Task-Level Object Grasping for Simulated Agents

Abstract
Simulating a human figure performing a manual task requires that the agent interact with objects in the environment in a realistic manner. Graphical or programming interfaces for controlling human figure animation, however, do not allow the animator to instruct the system with concise "high-level" commands. Nor can instructions coming from a high-level planner be given directly to a synthetic agent, because they do not specify such details as which end-effector to use or where on the object to grasp. Because current animation systems require joint-angle displacement descriptions of motion, even for motions that involve upwards of 15 joints, an efficient connection between high-level specifications and low-level hand joint motion is required. In this paper we describe a system that directs task-level, general-purpose object grasping for a simulated human agent. The Object-Specific Reasoner (OSR) is a reasoning module that uses knowledge of the object involved in the underspecified action to generate values for the missing parameters. The Grasp Behavior manages the simultaneous motions of the joints in the hand, wrist, and arm, and provides a programmer with a high-level description of the desired action. When composed hierarchically, the OSR and the Grasp Behavior interpret task-level commands and direct specific motions to the animation system. These modules are implemented as part of the Jack system at the University of Pennsylvania.
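To make the described pipeline concrete, the following is a minimal sketch of the idea that an object-specific reasoner fills in parameters a high-level planner left unspecified (end-effector, grasp site) before a grasp behavior turns the completed command into coordinated motion. All names here (ObjectSpecificReasoner, GraspBehavior, TaskCommand, OBJECT_KNOWLEDGE) are hypothetical illustrations, not the paper's actual interfaces.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical illustration of the OSR / Grasp Behavior pipeline described
# in the abstract; names and structure are assumptions, not the paper's API.

@dataclass
class TaskCommand:
    verb: str                            # e.g. "grasp"
    obj: str                             # target object name
    end_effector: Optional[str] = None   # left unspecified by the planner
    grasp_site: Optional[str] = None     # where on the object to grasp

# Toy object knowledge base: preferred grasp site and hand per object type.
OBJECT_KNOWLEDGE = {
    "mug": {"grasp_site": "handle", "end_effector": "right_hand"},
    "screwdriver": {"grasp_site": "shaft", "end_effector": "right_hand"},
}

class ObjectSpecificReasoner:
    """Fills in parameters the high-level planner left unspecified."""
    def resolve(self, cmd: TaskCommand) -> TaskCommand:
        defaults = OBJECT_KNOWLEDGE.get(cmd.obj, {})
        if cmd.end_effector is None:
            cmd.end_effector = defaults.get("end_effector", "right_hand")
        if cmd.grasp_site is None:
            cmd.grasp_site = defaults.get("grasp_site", "center")
        return cmd

class GraspBehavior:
    """Turns a fully specified grasp command into low-level motion goals."""
    def execute(self, cmd: TaskCommand) -> list:
        # A real system would drive arm, wrist, and finger joints
        # simultaneously; here we only emit symbolic motion directives.
        return [
            f"reach {cmd.end_effector} toward {cmd.obj}:{cmd.grasp_site}",
            f"preshape {cmd.end_effector} fingers",
            f"close {cmd.end_effector} around {cmd.obj}:{cmd.grasp_site}",
        ]

# Usage: an underspecified planner command is completed, then executed.
cmd = ObjectSpecificReasoner().resolve(TaskCommand(verb="grasp", obj="mug"))
for step in GraspBehavior().execute(cmd):
    print(step)
```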
