Training neural nets with the reactive tabu search
- 1 January 1995
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 6 (5), 1185-1200
- https://doi.org/10.1109/72.410361
Abstract
In this paper the task of training subsymbolic systems is considered as a combinatorial optimization problem and solved with the heuristic scheme of the reactive tabu search (RTS). An iterative optimization process based on a "modified local search" component is complemented with a meta-strategy to realize a discrete dynamical system that discourages limit cycles and the confinement of the search trajectory in a limited portion of the search space. Possible cycles are discouraged by prohibiting (i.e., making tabu) the execution of moves that reverse the ones applied in the most recent part of the search. The prohibition period is adapted in an automated way. The confinement is avoided and a proper exploration is obtained by activating a diversification strategy when too many configurations are repeated excessively often. The RTS method is applicable to nondifferentiable functions, is robust with respect to the random initialization, and is effective in continuing the search after local minima. Three tests of the technique on feedforward and feedback systems are presented.
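As a concrete illustration of the scheme the abstract describes (prohibition of moves that reverse recent ones, an automatically adapted prohibition period, and diversification triggered when configurations repeat too often), the sketch below runs a reactive tabu search over a binary-encoded search space. It is not the authors' implementation: the function names, the reaction rule, the diversification trigger, and all constants are illustrative assumptions; the objective would typically be the training error of a network with discretized weights.

```python
import random


def reactive_tabu_search(objective, n_bits, max_iters=2000, seed=0):
    """Sketch of a reactive tabu search over binary configurations.

    `objective` maps a tuple of 0/1 bits to a cost to be minimized, e.g. the
    training error of a net whose weights are encoded by those bits.  All
    constants and adaptation rules below are illustrative assumptions.
    """
    rng = random.Random(seed)
    x = tuple(rng.randint(0, 1) for _ in range(n_bits))
    best_x, best_f = x, objective(x)

    prohibition = 1                     # adaptive prohibition period
    last_flip = [-(10 ** 9)] * n_bits   # iteration at which each bit was last flipped
    visits = {}                         # repetition counts per configuration
    last_seen = {}                      # iteration at which a configuration was last visited
    often_repeated = 0                  # configurations repeated "too often"

    for it in range(max_iters):
        # Reaction: a recently repeated configuration -> longer prohibition;
        # no recent repetition -> shorter prohibition.
        if x in last_seen and it - last_seen[x] < 2 * n_bits:
            prohibition = min(n_bits - 1, prohibition + max(1, prohibition // 10))
            visits[x] = visits.get(x, 0) + 1
            if visits[x] == 3:
                often_repeated += 1
        else:
            prohibition = max(1, prohibition - 1)
        last_seen[x] = it

        # Diversification: escape the current basin with a burst of random flips
        # once too many configurations have been repeated excessively often.
        if often_repeated > 3:
            often_repeated = 0
            visits.clear()
            flips = set(rng.sample(range(n_bits), k=max(1, n_bits // 4)))
            x = tuple(b ^ 1 if i in flips else b for i, b in enumerate(x))

        # Modified local search: best non-prohibited 1-bit flip; a prohibited
        # move is allowed only if it improves on the best value found so far.
        move, move_f = None, None
        for i in range(n_bits):
            y = x[:i] + (x[i] ^ 1,) + x[i + 1:]
            fy = objective(y)
            if it - last_flip[i] <= prohibition and fy >= best_f:
                continue
            if move is None or fy < move_f:
                move, move_f = i, fy
        if move is None:
            continue  # every move prohibited this iteration

        last_flip[move] = it
        x = x[:move] + (x[move] ^ 1,) + x[move + 1:]
        if move_f < best_f:
            best_x, best_f = x, move_f

    return best_x, best_f


# Toy usage: minimize the number of set bits, just to exercise the search loop.
if __name__ == "__main__":
    solution, cost = reactive_tabu_search(lambda bits: sum(bits), n_bits=16)
    print(solution, cost)
```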