LEARNING IN NAVIGATION: GOAL FINDING IN GRAPHS

Abstract
A robotic agent operating in an unknown and complex environment may employ a search strategy of some kind to perform a navigational task such as reaching a given goal. In the process of performing the task, the agent can attempt to discover characteristics of its environment that enable it to choose a more efficient search strategy for that environment. If the agent is able to do this, we can say that it has "learned to navigate" — i.e., to improve its navigational performance. This paper describes how an agent can learn to improve its goal-finding performance in a class of discrete spaces, represented by graphs embedded in the plane. We compare several basic search strategies on two different classes of "random" graphs and show how information collected during the traversal of a graph can be used to classify the graph, thus allowing the agent to choose the search strategy best suited for that graph.
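The approach the abstract describes, running a basic search strategy while collecting statistics about the graph that can inform strategy selection, can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function names (`bfs_steps`, `dfs_steps`, `degree_probe`), the choice of BFS and DFS as the candidate strategies, and the use of average node degree as the classification signal are all assumptions made for the example.

```python
from collections import deque

def bfs_steps(adj, start, goal):
    # Breadth-first goal search; returns the number of node expansions
    # needed to reach the goal, or None if the goal is unreachable.
    seen = {start}
    frontier = deque([start])
    steps = 0
    while frontier:
        node = frontier.popleft()
        steps += 1
        if node == goal:
            return steps
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return None

def dfs_steps(adj, start, goal):
    # Depth-first goal search with the same expansion-count measure,
    # so the two strategies can be compared on the same graph.
    seen = {start}
    stack = [start]
    steps = 0
    while stack:
        node = stack.pop()
        steps += 1
        if node == goal:
            return steps
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return None

def degree_probe(adj, start, limit=10):
    # Collect degree statistics along a short traversal -- a stand-in
    # for the "information collected during the traversal of a graph"
    # that the agent could use to classify the environment.
    seen = {start}
    frontier = deque([start])
    degrees = []
    while frontier and len(degrees) < limit:
        node = frontier.popleft()
        degrees.append(len(adj[node]))
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return sum(degrees) / len(degrees)
```

An agent might run `degree_probe` on a prefix of its traversal and, based on the observed average degree, commit to whichever strategy performed better on previously seen graphs of that class.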
