Combining vision based information and partial geometric models in automatic grasping
- 4 December 2002
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
The problem of making sensing and acting techniques cooperate to achieve a given manipulation task in a partially structured environment is treated in the context of automatic grasping: the decisional process is guided by a combination of partial geometric models and vision data. The geometric models represent the known information about the robot workspace and the object to be grasped. The vision-based information is collected at execution time using a 2D camera and a 3D vision sensor, both mounted on the robot end effector. Robot motions and sensing operations therefore have to be combined, both to acquire the missing information and to guide the grasping movements. This is achieved by applying three processing phases, respectively aimed at selecting a viewpoint that avoids occlusions, modeling the local environment of the object to be grasped, and determining the grasping parameters.
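The abstract describes a three-phase pipeline. The sketch below is a minimal illustration of how such a pipeline might be organized, not the paper's implementation; all names (`GeometricModel`, `select_viewpoint`, `model_local_environment`, `determine_grasp_parameters`) and the placeholder values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class GeometricModel:
    """Partial a-priori model of the workspace and the target object."""
    obstacles: list = field(default_factory=list)   # known workspace geometry
    object_shape: object = None                     # known part of the object

@dataclass
class SensorData:
    """Vision data gathered at execution time by the wrist-mounted sensors."""
    image_2d: object = None                          # 2D camera image
    range_points: list = field(default_factory=list) # 3D vision sensor output

def select_viewpoint(model: GeometricModel) -> tuple:
    """Phase 1: choose a sensor pose from which the target is not occluded.
    A real system would search candidate poses against the partial model;
    here we return a fixed placeholder pose (x, y, z, roll, pitch, yaw)."""
    return (0.5, 0.0, 0.8, 0.0, 1.57, 0.0)

def model_local_environment(model: GeometricModel,
                            data: SensorData) -> GeometricModel:
    """Phase 2: fuse the sensed 3D points into the partial model of the
    object's surroundings, filling in the information the model lacks."""
    model.obstacles.extend(data.range_points)
    return model

def determine_grasp_parameters(model: GeometricModel) -> dict:
    """Phase 3: compute grasp parameters (approach direction, gripper
    opening, contact points) from the completed local model."""
    return {"approach": (0.0, 0.0, -1.0), "opening": 0.06, "contacts": []}

def grasp_pipeline(model: GeometricModel, sense) -> dict:
    """Run the three phases: view selection, local modeling, grasp choice."""
    viewpoint = select_viewpoint(model)
    data = sense(viewpoint)          # move the end effector, acquire vision data
    model = model_local_environment(model, data)
    return determine_grasp_parameters(model)

if __name__ == "__main__":
    # Dummy sensing function standing in for the real camera / range sensor.
    dummy_sense = lambda pose: SensorData(range_points=[(0.5, 0.1, 0.02)])
    print(grasp_pipeline(GeometricModel(), dummy_sense))
```

The key design point the abstract implies is the interleaving of motion and sensing: phase 1 produces a viewpoint that the robot must move to before phase 2 can run, so sensing is an action planned by the system rather than a passive input.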