Leap-frog is a robust algorithm for training neural networks
- 1 February 1999
- journal article
- Published by Taylor & Francis in Network: Computation in Neural Systems
- Vol. 10 (1), 1-13
- https://doi.org/10.1088/0954-898x/10/1/001
Abstract
Optimization of perceptron neural network classifiers requires a robust optimization algorithm. In general, the best network is selected after a number of optimization trials. An effective optimization algorithm generates good weight-vector solutions in a few optimization trial runs owing to its inherent ability to escape local minima, whereas a less effective algorithm requires a larger number of trial runs. Repetitive training and testing is a tedious process, so an effective algorithm is desirable to reduce training time and increase the quality of the set of available weight-vector solutions. We present leap-frog as a robust optimization algorithm for training neural networks. In this paper the dynamic principles of leap-frog are described together with experiments to show the ability of leap-frog to generate reliable weight-vector solutions. Performance histograms are used to compare leap-frog with a variable-metric method, a conjugate-gradient method with modified restarts, and a constrained-momentum-based algorithm. Results indicate that leap-frog performs better in terms of classification error than the other three algorithms on two distinctly different test problems.
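The abstract does not give the update equations, but leap-frog optimizers of this kind treat the weight vector as a particle accelerated by the negative gradient of the loss and advance it with a leap-frog integration step. The sketch below is a rough illustration only, assuming a standard leap-frog integrator plus a simplified velocity-damping rule standing in for the paper's interference strategy; the function names and constants are placeholders, not the authors' method.

```python
import numpy as np

def leapfrog_minimize(grad, x0, step=0.1, damping=0.5, n_steps=500):
    """Minimal sketch of a leap-frog (dynamic trajectory) minimizer.

    The weight vector is treated as a particle accelerated by the negative
    gradient of the loss, integrated with the leap-frog scheme. The
    velocity-damping heuristic is a placeholder for the more involved
    interference rules described in the paper.
    """
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)                      # particle starts at rest
    a = -grad(x)                              # acceleration = -gradient
    for _ in range(n_steps):
        v_half = v + 0.5 * step * a           # half-step velocity update
        x = x + step * v_half                 # full-step position update
        a_new = -grad(x)
        v_new = v_half + 0.5 * step * a_new   # complete the velocity step
        # crude interference rule (assumption): damp the velocity when it
        # opposes the new acceleration, i.e. the particle is moving uphill
        if np.dot(v_new, a_new) < 0:
            v_new = damping * v_new
        v, a = v_new, a_new
    return x

# Usage: minimize a simple quadratic as a stand-in for a network loss
if __name__ == "__main__":
    grad = lambda w: 2.0 * (w - np.array([1.0, -2.0]))
    print(leapfrog_minimize(grad, x0=np.zeros(2)))
```

Because the search direction carries momentum from the simulated dynamics rather than following the local gradient alone, a trajectory of this kind can pass through shallow local minima, which is the robustness property the paper examines.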