Visual guided grasping of aggregates using self-valuing learning
- 25 June 2003
- Conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 4, pp. 3912-3917
- https://doi.org/10.1109/robot.2002.1014336
Abstract
We present a self-valuing learning technique that is capable of learning how to grasp unfamiliar objects and of generalizing the learned abilities. The learning system consists of two learners which distinguish between local and global grasping criteria. The local criteria are not object-specific, while the global criteria cover the physical properties of each object. The system is self-valuing, i.e. it rates its own actions by evaluating sensory information with image-processing techniques. An experimental setup consisting of a PUMA-260 manipulator, equipped with a hand camera and a force/torque sensor, was used to test this scheme. The system has shown the ability to grasp a wide range of objects and to apply previously learned knowledge to new objects.
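The abstract describes a two-learner, self-valuing loop: object-independent local criteria rank candidate grasps, per-object global knowledge records what worked, and a rating computed from the robot's own sensors (rather than an external teacher) updates both. The sketch below is a minimal illustration of that loop under loose assumptions; every class name, feature, threshold, and the rating formula are invented for the example and are not the paper's actual method, and the force/torque readings are simulated in place of real hardware.

```python
# Hypothetical sketch of a self-valuing grasp-learning loop.
# Names, features, and thresholds are illustrative assumptions.
import random


class LocalLearner:
    """Learns object-independent grasping criteria from local image
    features (e.g., how well a parallel jaw fits an edge segment)."""

    def __init__(self, lr=0.1):
        self.weights = {}  # feature name -> learned weight
        self.lr = lr

    def score(self, features):
        return sum(self.weights.get(k, 0.0) * v for k, v in features.items())

    def update(self, features, rating):
        # Move the predicted score toward the self-generated rating.
        error = rating - self.score(features)
        for k, v in features.items():
            self.weights[k] = self.weights.get(k, 0.0) + self.lr * error * v


class GlobalLearner:
    """Remembers per-object grasp outcomes, capturing the physical
    properties (stability, mass distribution) of each known object."""

    def __init__(self):
        self.object_memory = {}  # object id -> list of (grasp, rating)

    def best_grasp(self, obj_id):
        trials = self.object_memory.get(obj_id, [])
        if not trials:
            return None
        grasp, rating = max(trials, key=lambda t: t[1])
        return grasp if rating > 0.5 else None

    def update(self, obj_id, grasp, rating):
        self.object_memory.setdefault(obj_id, []).append((grasp, rating))


def self_rating(grip_force, slip_detected):
    """Rate the executed grasp from sensor data alone: here a slip-free
    grasp near a target force of 5 N scores highest (assumed rule)."""
    if slip_detected:
        return 0.0
    return max(0.0, 1.0 - abs(grip_force - 5.0) / 5.0)


def grasp_trial(obj_id, candidate_grasps, local, global_):
    # Prefer the best grasp remembered for this object; otherwise rank
    # candidates by the object-independent local criteria.
    grasp = global_.best_grasp(obj_id)
    if grasp is None:
        grasp = max(candidate_grasps, key=lambda g: local.score(g["features"]))

    # --- execute the grasp on the manipulator here (omitted) ---
    # Simulated readings stand in for the force/torque sensor and slip check.
    grip_force = random.uniform(0.0, 10.0)
    slipped = random.random() < 0.2

    rating = self_rating(grip_force, slipped)
    local.update(grasp["features"], rating)
    global_.update(obj_id, grasp, rating)
    return rating


if __name__ == "__main__":
    local, global_ = LocalLearner(), GlobalLearner()
    candidates = [
        {"pose": (0.0, 0.0, angle),
         "features": {"edge_fit": angle / 90.0, "width_ok": 1.0}}
        for angle in (0.0, 45.0, 90.0)
    ]
    for trial in range(5):
        r = grasp_trial("mug", candidates, local, global_)
        print(f"trial {trial}: rating {r:.2f}")
```

In this reading, generalization to new objects comes from the local learner, which is trained on every trial regardless of object identity, while repeated grasps of the same object converge on the remembered pose with the best self-assigned rating.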