Omnivision-based probabilistic self-localization for a mobile shopping assistant continued

Abstract
The basic idea of our omniview-based MCL approach and preliminary experimental results were presented in our previous paper [Proc. IROS 2002, pp. 256-262]. Continuing that work, this paper describes a number of methodical and technical improvements that address challenges arising from the characteristics of our real-world application: the vision-based self-localization of a mobile robot acting as a shopping assistant in the maze-like environment of a home store. To cope with highly variable illumination conditions, we present a reference-based correction approach that achieves robust, automatic luminance stabilization and color adaptation already at the level of image formation. To deal with severe occlusions or disturbances of the omnidirectional image, caused by, e.g., people standing near the robot or local illumination artifacts, we introduce a novel selective observation comparison method as a prerequisite for a robust particle filter update. Further studies investigate the impact of the chosen observation model on localization accuracy. The results of a series of localization experiments carried out in the home store confirm the robustness and superiority of our advanced real-time approach.
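The abstract describes the selective observation comparison only at a high level. As a hedged illustration of the general idea, the following Python sketch shows a particle-filter measurement update in which sectors of the omnidirectional feature vector that are flagged as occluded or disturbed are masked out before each particle's likelihood is computed. All names (`selective_weight_update`, `expected_observation`, `valid_mask`) and the Gaussian observation model are assumptions made for illustration, not the authors' actual implementation.

```python
import numpy as np

def selective_weight_update(particles, weights, observation, valid_mask,
                            expected_observation, sigma=0.1):
    """Sketch of a selective particle-filter measurement update: only the
    sectors of the omnidirectional feature vector flagged as valid (e.g.,
    not occluded by people near the robot) contribute to the likelihood."""
    for i, pose in enumerate(particles):
        expected = expected_observation(pose)        # predicted features at this pose
        diff = (observation - expected)[valid_mask]  # compare valid sectors only
        # Assumed Gaussian observation model over the retained feature dimensions
        weights[i] *= np.exp(-0.5 * np.dot(diff, diff) / sigma**2)
    return weights / weights.sum()                   # renormalize

# Toy usage (all quantities hypothetical): 100 pose hypotheses, a 16-sector
# feature vector, and four sectors masked out as occluded.
rng = np.random.default_rng(0)
particles = rng.uniform(0.0, 10.0, size=(100, 3))    # (x, y, theta) poses
weights = np.ones(100) / 100
observation = rng.uniform(size=16)
valid_mask = np.ones(16, dtype=bool)
valid_mask[4:8] = False                              # sectors blocked by a person
weights = selective_weight_update(
    particles, weights, observation, valid_mask,
    expected_observation=lambda pose: np.full(16, 0.5))
```

The design point mirrored here is that discarding unreliable sectors, rather than letting them corrupt the likelihood, is what keeps the particle filter update robust under occlusion.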