Learning to grasp using visual information
- 23 December 2002
- proceedings article
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 3, 2470-2476
- https://doi.org/10.1109/robot.1996.506534
Abstract
A scheme for learning to grasp objects using visual information is presented. A system is considered that coordinates a parallel-jaw gripper (hand) and a camera (eye). Given an object, and considering its geometry, the system chooses grasping points and performs the grasp. The system learns while performing grasping trials. For each grasp we store location parameters that code the locations of the grasping points, quality parameters that are relevant features for the assessment of grasp quality, and the grade. We learn two separate subproblems: (1) to choose the grasping points, and (2) to predict the quality of a given grasp. The location parameters are used to locate grasping points on new target objects. We consider a function from the quality parameters to the grade, learn the function from examples, and later use it to estimate grasp quality. In this way grasp quality for novel situations can be generalized and estimated.
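The second subproblem described above, learning a function from quality parameters to grade from stored trials and using it to rank candidate grasps, can be sketched as follows. The abstract does not name a specific learner, so a k-nearest-neighbour regressor over the stored quality parameters is assumed here purely for illustration; the function and parameter names are hypothetical.

```python
# Sketch of the quality-prediction subproblem from the abstract.
# Each past trial is stored as (quality_params, grade); the learner
# assumed here (k-NN averaging) is an illustrative choice, not the
# method specified by the paper.

def predict_grade(trials, quality_params, k=3):
    """Estimate the grade of a candidate grasp from past trials.

    trials: list of (quality_params, grade) pairs from earlier grasps.
    quality_params: feature vector of the candidate grasp.
    """
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # Average the grades of the k most similar past grasps.
    nearest = sorted(trials, key=lambda t: dist(t[0], quality_params))[:k]
    return sum(grade for _, grade in nearest) / len(nearest)

def choose_best_grasp(trials, candidates):
    """Rank candidate grasps (each a quality-parameter vector) by
    predicted grade and return the most promising one."""
    return max(candidates, key=lambda q: predict_grade(trials, q))
```

In this reading, each completed grasp appends a new `(quality_params, grade)` pair to `trials`, so the quality estimate generalizes to novel situations as more trials accumulate.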