Incremental Evolution of Complex General Behavior

Abstract
Several researchers have demonstrated how complex action sequences can be learned through neuroevolution (i.e., evolving neural networks with genetic algorithms). However, complex general behavior such as evading predators or avoiding obstacles, which is not tied to specific environments, turns out to be very difficult to evolve. Often the system discovers mechanical strategies, such as moving back and forth, that help the agent cope but are not very effective, do not appear believable, and do not generalize to new environments. The problem is that a general strategy is too difficult for the evolutionary system to discover directly. This article proposes an approach wherein such complex general behavior is learned incrementally, by starting with simpler behavior and gradually making the task more challenging and general. The task transitions are implemented through successive stages of Delta coding (i.e., evolving modifications to the previous solution), which allows even converged populations to adapt to the new task. The method is tested in the stochastic, dynamic task of prey capture and is compared with direct evolution. The incremental approach evolves more effective and more general behavior and should also scale up to harder tasks.
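The two ideas at the core of the abstract — Delta coding over a fixed base solution, and chaining stages so that each harder task starts from the previous solution — can be illustrated with a minimal sketch. This is not the article's implementation: the real-valued genome, the generational GA with truncation selection, and the names `evolve`, `incremental_evolve`, and `delta_range` are all illustrative assumptions.

```python
import random


def evolve(fitness, base, pop_size=30, gens=40, delta_range=1.0, genome_len=8):
    """Delta coding sketch: evolve modification vectors ("deltas") that are
    added to a fixed base solution, rather than evolving solutions directly.
    This lets a search restart productively even from a converged population."""
    # Initialize a population of small deltas around the base solution.
    pop = [[random.uniform(-delta_range, delta_range) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(gens):
        # Evaluate each candidate as base + delta; keep the better half (elitist).
        scored = sorted(pop,
                        key=lambda d: fitness([b + x for b, x in zip(base, d)]),
                        reverse=True)
        parents = scored[:pop_size // 2]
        # Refill the population with Gaussian-mutated copies of the parents.
        pop = parents + [[g + random.gauss(0, 0.1 * delta_range) for g in p]
                         for p in parents]
    best = max(pop, key=lambda d: fitness([b + x for b, x in zip(base, d)]))
    return [b + x for b, x in zip(base, best)]


def incremental_evolve(stage_fitnesses, genome_len=8):
    """Incremental evolution sketch: each successively harder task is evolved
    as a Delta-coding stage starting from the previous stage's best solution."""
    base = [0.0] * genome_len
    for stage_fitness in stage_fitnesses:
        base = evolve(stage_fitness, base, genome_len=genome_len)
    return base
```

A stage's fitness function would, in the article's setting, score a network controller on an easier version of the prey-capture task; here any callable mapping a parameter vector to a score will do.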