Model-based visual feedback control for a hand-eye coordinated robotic system

Abstract
The integration of a single camera into a robotic system to control, in real time, the relative position and orientation between the robot's end-effector and a moving part is discussed. Only monocular vision techniques are considered because of current limitations in the speed of computer vision analysis. The approach uses geometric models of both the part and the camera, together with extracted image features, to generate the robot control signals needed for tracking. The part and camera models are also used during the teaching stage to predict the important image features that will appear during task execution.
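To make the idea concrete, the following is a minimal sketch of the kind of model-based visual servo loop the abstract describes: model feature points on the part are projected through an assumed pinhole camera model to predict desired image features, and the error between observed and predicted features is mapped through an image (interaction) Jacobian to a camera velocity command. All function names, gains, and the specific interaction-matrix formulation are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def project(points_cam):
    """Project 3-D points (camera frame) to normalized image coordinates
    with an assumed pinhole camera model (focal length = 1)."""
    X, Y, Z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    return np.column_stack((X / Z, Y / Z))

def interaction_matrix(features, depths):
    """Stack the classic 2x6 image Jacobian for each point feature, relating
    camera velocity (vx, vy, vz, wx, wy, wz) to image-plane feature motion."""
    rows = []
    for (x, y), Z in zip(features, depths):
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x])
    return np.array(rows)

def visual_servo_step(observed, desired, depths, gain=0.5):
    """One control update: drive observed image features toward the features
    predicted from the part and camera models at the taught pose."""
    error = (observed - desired).reshape(-1)     # stacked image-feature error
    L = interaction_matrix(observed, depths)     # model-based image Jacobian
    # Camera velocity command via least-squares inverse of the Jacobian.
    return -gain * np.linalg.pinv(L) @ error

# Example: four coplanar model points on the part, observed slightly off-target.
part_points = np.array([[0.1, 0.1, 1.0], [-0.1, 0.1, 1.0],
                        [-0.1, -0.1, 1.0], [0.1, -0.1, 1.0]])
desired_features = project(part_points)          # predicted from the models
observed_features = desired_features + 0.02      # simulated tracking error
velocity_cmd = visual_servo_step(observed_features, desired_features,
                                 depths=part_points[:, 2])
print("camera velocity command:", np.round(velocity_cmd, 4))
```

A usage note on the design: the depths fed to the Jacobian come from the geometric part model rather than from stereo measurement, which is what allows a single camera to suffice, consistent with the monocular approach the abstract emphasizes.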
