Omnidirectional visual information for navigating a mobile robot

Abstract
A method for acquiring the relative positions between observation points using omnidirectional views (ODVs) is presented. The method does not require any internal sensor data from the mobile robot, so the robot can move through the environment without observation constraints. A method for obtaining the absolute positions and a global map by fusing local maps acquired from omnidirectional range information is also proposed. The resulting global map, however, is not precise enough for recognizing the environment. A more precise global map will therefore need to be obtained by using other visual information included in ODVs, such as the color and shape of objects.
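
The abstract only outlines the approach. As a rough illustration of the map-fusion step, the following is a minimal sketch assuming 2D poses (x, y, theta), known relative poses between successive observation points, and local maps given as point sets; the function names and data layout are assumptions for illustration, not the paper's implementation.

import numpy as np

def compose(pose_a, pose_b):
    """Compose two 2D poses (x, y, theta): pose_b expressed in pose_a's frame."""
    xa, ya, ta = pose_a
    xb, yb, tb = pose_b
    x = xa + xb * np.cos(ta) - yb * np.sin(ta)
    y = ya + xb * np.sin(ta) + yb * np.cos(ta)
    return (x, y, ta + tb)

def to_global(pose, local_points):
    """Transform local map points (N x 2 array) into the global frame."""
    x, y, t = pose
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return local_points @ R.T + np.array([x, y])

def fuse_maps(relative_poses, local_maps):
    """Chain relative poses into absolute poses and merge local maps into one global map."""
    pose = (0.0, 0.0, 0.0)                      # first observation point taken as the origin
    global_map = [to_global(pose, local_maps[0])]
    for rel, pts in zip(relative_poses, local_maps[1:]):
        pose = compose(pose, rel)               # absolute pose of the next observation point
        global_map.append(to_global(pose, pts))
    return np.vstack(global_map)

In this sketch, the relative poses would come from the ODV-based position estimation described in the abstract, and the merged point set plays the role of the fused global map.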
