GENETIC ALGORITHMS: WHAT FITNESS SCALING IS OPTIMAL?

Abstract
Genetic algorithms are among the most promising optimization techniques. They are based on the following reasonable idea. Suppose that we want to maximize an objective function J(x). We somehow choose the first generation of “individuals” x1, x2, …, xn (i.e., possible values of x) and compute the “fitness” J(xi) of each of these individuals. To each individual xi we assign a survival probability pi that is proportional to its fitness. To get the next generation, we then repeat the following procedure k times: take two individuals at random (i.e., xi with probability pi) and “combine” them according to some rule. For each individual of this new generation, we again compute its fitness (and survival probability), “combine” them to get the third generation, etc. Under certain reasonable conditions, the value of the objective function increases from generation to generation and converges to a maximal value. The performance of genetic algorithms can be substantially improved if we use fitness scaling, i.e., use f(J(xi)) instead of J(xi) as the fitness value, where f(x) is some fixed function called a scaling function. The efficiency of fitness scaling depends essentially on the choice of f. So what f should we choose? In the present paper we formulate the problem of choosing f as a mathematical optimization problem and solve it under different optimality criteria. As a result, we get a list of functions f that are optimal under these criteria. This list includes both functions that have empirically proved to be the best for some problems and some new functions that may be worth trying.
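To make the setup concrete, here is a minimal Python sketch of one generation step, assuming real-valued individuals, averaging as a placeholder “combination” rule, and an exponential scaling function f chosen purely for illustration; none of these specifics come from the paper itself.

    import random

    def next_generation(population, objective, scaling=lambda v: v, k=None):
        """One generation step of the scheme in the abstract:
        scaled fitness -> survival probabilities -> pairwise combination."""
        if k is None:
            k = len(population)
        # Scaled fitness f(J(x_i)) for every individual (plain J(x_i) when f is the identity).
        fitness = [scaling(objective(x)) for x in population]
        total = sum(fitness)
        probs = [f / total for f in fitness]  # survival probability p_i proportional to f(J(x_i))
        children = []
        for _ in range(k):
            # Draw two parents, each with probability p_i, and "combine" them;
            # averaging is only an illustrative rule for real-valued x.
            a, b = random.choices(population, weights=probs, k=2)
            children.append((a + b) / 2)
        return children

    # Hypothetical usage: maximize J(x) = -(x - 3)^2. The exponential scaling
    # f(v) = 2^v keeps fitness positive, which raw J here would not be.
    pop = [random.uniform(-10, 10) for _ in range(50)]
    for _ in range(30):
        pop = next_generation(pop, lambda x: -(x - 3) ** 2, scaling=lambda v: 2 ** v)
    print(max(pop, key=lambda x: -(x - 3) ** 2))

The example also hints at why scaling matters in practice: proportional selection needs positive fitness values with useful contrast between individuals, and the choice of f controls both.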
