Extension of the ALVINN-architecture for robust visual guidance of a miniature robot

Abstract
Extensions of the ALVINN architecture are introduced that enable a KHEPERA miniature robot to navigate a labyrinth robustly using vision. The reimplementation of the ALVINN approach demonstrates that complex visual robot navigation is achievable in indoor environments as well, using a direct input-output mapping with a multilayer perceptron network trained by expert cloning. The extensions overcome the restrictions of the camera's small visual field by augmenting the input vector with history components, introducing a velocity dimension, and evaluating the network's output with a dynamic neural field. This creates the prerequisites for taking turns that are no longer visible in the current image and thus for choosing among several alternative actions (e.g. at crossings).
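The direct input-output mapping described above can be sketched as a single forward pass of a small perceptron network whose input vector combines the camera image with the history components and the velocity dimension. All sizes and weights below are hypothetical placeholders, since the abstract does not specify them; the winner-take-all readout at the end stands in for the dynamic neural field the paper actually uses to evaluate the output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not specified in the abstract.
IMG_PIXELS = 32 * 24      # coarse camera image, flattened
HISTORY = 5               # previous steering outputs appended as history components
N_STEER = 15              # discretised steering directions (output units)

# Input vector: image pixels + history components + one velocity dimension
IN_DIM = IMG_PIXELS + HISTORY + 1
HIDDEN = 30

# Randomly initialised weights; in the paper these would be learned
# by expert cloning (supervised imitation of a human driver).
W1 = rng.normal(0.0, 0.1, (HIDDEN, IN_DIM))
W2 = rng.normal(0.0, 0.1, (N_STEER, HIDDEN))

def forward(image, history, velocity):
    """One forward pass: direct mapping from sensor input to a steering profile."""
    x = np.concatenate([image.ravel(), history, [velocity]])
    h = np.tanh(W1 @ x)          # hidden layer
    return np.tanh(W2 @ h)       # activation profile over steering directions

image = rng.random((24, 32))
history = np.zeros(HISTORY)      # no previous actions yet
profile = forward(image, history, velocity=0.5)

# Simple argmax readout; the paper instead feeds this profile into a
# dynamic neural field, which stabilises the choice among alternatives.
steer_idx = int(np.argmax(profile))
```

The history components are what let the controller commit to a turn whose entrance has already left the camera's narrow field of view: the recent output sequence carries the context the current image no longer contains.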
