What can be learned from human reach-to-grasp movements for the design of robotic hand-eye systems?

Abstract
In the field of robot motion control, visual servoing has been proposed as the strategy of choice for coping with imprecise models and calibration errors. Remaining problems, such as the need for a high rate of visual feedback, are deemed solvable by the development of real-time vision modules. Human grasping, however, which still outshines its robotic counterparts especially with respect to robustness and flexibility, demonstrably requires only sparse, asynchronous visual feedback. We therefore examined current neuroscientific models of the control of human reach-to-grasp movements, with the emphasis on the visual control strategy employed. From this, we developed a control model that unifies the two robotic strategies look-then-move and visual servoing, thereby compensating for the problems that each strategy exhibits when used alone.
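To make the unified idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual model) of a reaching controller that works with sparse, asynchronous visual measurements: far from the target it takes coarse open-loop steps toward the last visual estimate (look-then-move), and close to the target it switches to fine closed-loop corrections (visual servoing). All names, thresholds, and gains (`servo_radius`, `coarse_gain`, `fine_gain`) are illustrative assumptions.

```python
import numpy as np

def hybrid_reach(target_estimate, pose, get_measurement,
                 servo_radius=0.05, coarse_gain=0.3, fine_gain=0.8,
                 tol=1e-3, max_steps=500):
    """Drive `pose` toward the target using sparse, asynchronous vision.

    Hypothetical sketch: outside `servo_radius` take coarse open-loop steps
    toward the last visual estimate (look-then-move); whenever
    `get_measurement` returns a new observation, update that estimate;
    inside `servo_radius` apply fine closed-loop corrections (visual servoing).
    """
    pose = np.asarray(pose, dtype=float)
    target_estimate = np.asarray(target_estimate, dtype=float)

    for _ in range(max_steps):
        m = get_measurement()                  # None when no new image has arrived
        if m is not None:
            target_estimate = np.asarray(m, dtype=float)

        error = target_estimate - pose
        dist = np.linalg.norm(error)
        if dist < tol:
            break                              # grasp position reached
        gain = coarse_gain if dist > servo_radius else fine_gain
        pose = pose + gain * error             # proportional step toward the estimate
    return pose


# Illustrative usage: a "camera" that delivers a noisy target measurement
# only on roughly every fifth control cycle (asynchronous, sparse feedback).
rng = np.random.default_rng(0)
true_target = np.array([0.4, 0.2, 0.1])

def camera():
    if rng.random() < 0.2:
        return true_target + rng.normal(0.0, 0.002, size=3)
    return None

final_pose = hybrid_reach(true_target + 0.05, [0.0, 0.0, 0.0], camera)
```

The point of the sketch is only that the two phases need not rely on the same feedback rate: the open-loop phase tolerates stale estimates, while the servoing phase consumes whatever measurements happen to arrive.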