Abstract
A new way of building control systems, known as behavior-based robotics, has recently been proposed to overcome the difficulties of the traditional artificial intelligence approach to robotics. This approach is based on the idea of providing the robot with a repertoire of simple behaviors and letting the environment determine which behavior should have control at any given time. We present a set of experiments in which neural networks with different architectures are trained to control a mobile robot designed to keep an arena clear by picking up trash objects and releasing them outside the arena. Controller weights are selected by a form of genetic algorithm and do not change during the robot's lifetime (i.e., no learning occurs). We compare, both in simulation and on a real robot, five different network architectures and show that a network allowing fine-grained modularity achieves significantly better performance. By comparing the function of each network module, and the interactions between modules, with a decomposition of the task into simple behavior components, we show that no simple correlation can be found; rather, module switching and module interactions correlate with low-level sensorimotor mappings. This implies that the engineering-oriented approach to behavior-based robotics may face serious limitations, because it is difficult to know in advance the appropriate mapping between behavior components and sensorimotor activity for a complex task.
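To make the evolutionary setup concrete, the sketch below illustrates the general scheme the abstract describes: controller weights are encoded in a genotype, selected by a genetic algorithm across generations, and remain fixed during each individual's lifetime. The network size, GA parameters, and the placeholder fitness function are illustrative assumptions, not values or code from the paper itself.

```python
# Minimal sketch (not the paper's actual setup) of evolving fixed-weight
# neural controllers with a genetic algorithm. All sizes and parameters
# below are assumptions chosen for illustration.
import numpy as np

N_INPUTS, N_HIDDEN, N_OUTPUTS = 8, 4, 2          # assumed sensor/motor layout
GENOME_LEN = (N_INPUTS + 1) * N_HIDDEN + (N_HIDDEN + 1) * N_OUTPUTS
POP_SIZE, N_GENERATIONS, MUTATION_STD = 100, 50, 0.1

def decode(genome):
    """Split a flat genome into two weight matrices (with bias columns)."""
    split = (N_INPUTS + 1) * N_HIDDEN
    w1 = genome[:split].reshape(N_HIDDEN, N_INPUTS + 1)
    w2 = genome[split:].reshape(N_OUTPUTS, N_HIDDEN + 1)
    return w1, w2

def controller(genome, sensors):
    """Fixed-weight feedforward controller: sensor readings -> motor commands."""
    w1, w2 = decode(genome)
    h = np.tanh(w1 @ np.append(sensors, 1.0))
    return np.tanh(w2 @ np.append(h, 1.0))

def evaluate(genome):
    """Placeholder fitness; in the experiments this would be how well the
    robot clears the arena of trash objects during one lifetime."""
    rng = np.random.default_rng(0)
    sensors = rng.uniform(0.0, 1.0, size=(20, N_INPUTS))
    return float(np.mean([controller(genome, s).sum() for s in sensors]))

rng = np.random.default_rng(42)
population = rng.normal(0.0, 1.0, size=(POP_SIZE, GENOME_LEN))

for generation in range(N_GENERATIONS):
    fitness = np.array([evaluate(g) for g in population])
    # Truncation selection: the best 20 genomes each produce 5 mutated copies.
    parents = population[np.argsort(fitness)[-20:]]
    offspring = np.repeat(parents, 5, axis=0)
    offspring += rng.normal(0.0, MUTATION_STD, size=offspring.shape)
    population = offspring  # weights never change within a lifetime
```

The key point mirrored here is that all adaptation happens across generations through selection and mutation; within a lifetime the controller is a fixed sensorimotor mapping.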
