Vision-guided mobile robot navigation using retroactive updating of position uncertainty

Abstract
The authors describe a vision-guided mobile robot navigation system that allows a mobile robot to navigate indoors at speeds of approximately 10 m/min in the presence of obstacles. This system uses empirically constructed models of the dependence of robot-motion uncertainties on commanded motions, both for vision-based self-location and for the planning of future motions toward the goal. The vision processes are model-based and use a Kalman filter to reduce the uncertainties in the position of the robot as the landmarks in the scene are matched with features extracted from the images. Position updating using vision is retroactive in the sense that the robot does not wait for the results of vision processing to become available; when the vision results do become available, the robot retroactively updates its positional uncertainties. In addition to this retroactive updating feature, another major difference between the FINALE system of Kosaka and Kak (1992) and the work reported here is that the authors are now able to execute multiple tasks concurrently.
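The retroactive-updating idea described above can be illustrated with a minimal sketch. The code below is not the authors' implementation: the state layout, the identity observation model, and the motion-noise values are all illustrative assumptions. It shows the general pattern only: a Kalman update is applied to the pose estimate saved at the moment the image was captured, and the motions commanded while vision was still processing are then replayed on top of the corrected estimate.

```python
import numpy as np

def predict(x, P, u, Q):
    """Propagate the pose estimate by commanded motion u.
    Q stands in for an empirical, motion-dependent uncertainty model."""
    return x + u, P + Q

def kalman_update(x, P, z, R):
    """Fuse a vision-based position fix z with measurement covariance R.
    Assumes (for illustration) that landmarks yield a direct observation
    of position, i.e. H = I."""
    H = np.eye(len(x))
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def retroactive_update(x_capture, P_capture, z, R, pending_motions):
    """Vision result z refers to the pose at image-capture time.
    Correct that saved estimate, then re-apply the motions executed
    while the vision processing was running."""
    x, P = kalman_update(x_capture, P_capture, z, R)
    for u, Q in pending_motions:
        x, P = predict(x, P, u, Q)
    return x, P

# Example: pose saved at capture, one motion executed during vision processing.
x0, P0 = np.zeros(2), np.eye(2)
z, R = np.array([0.1, -0.1]), 0.5 * np.eye(2)
pending = [(np.array([1.0, 0.0]), 0.1 * np.eye(2))]
x1, P1 = retroactive_update(x0, P0, z, R, pending)
```

The key point the sketch captures is that the robot keeps moving on dead reckoning while vision runs, and the delayed measurement still shrinks the positional covariance once it is folded in.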
