Minimisation methods for training feedforward neural networks
- 1 January 1994
- journal article
- Published by Elsevier in Neural Networks
- Vol. 7 (1), 1-11
- https://doi.org/10.1016/0893-6080(94)90052-3
Abstract
No abstract available.
This publication has 9 references indexed in Scilit:
- First- and Second-Order Methods for Learning: Between Steepest Descent and Newton's Method. Neural Computation, 1992
- Optimization for training neural nets. IEEE Transactions on Neural Networks, 1992
- Comparison and evaluation of variants of the conjugate gradient method for efficient learning in feed-forward neural networks with backward error propagation. Network: Computation in Neural Systems, 1992
- A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning. DAIMI Report Series, 1990
- Multilayer feedforward networks are universal approximators. Neural Networks, 1989
- Real-time application of neural networks for sensor-based control of robots with vision. IEEE Transactions on Systems, Man, and Cybernetics, 1989
- Learning representations by back-propagating errors. Nature, 1986
- Restart procedures for the conjugate gradient method. Mathematical Programming, 1977
- Generation and Use of Orthogonal Polynomials for Data-Fitting with a Digital Computer. Journal of the Society for Industrial and Applied Mathematics, 1957