A rationalized error back-propagation learning algorithm
- 1 January 1989
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- pp. 373-380, vol. 2
- https://doi.org/10.1109/ijcnn.1989.118725
Abstract
A method is proposed for learning in multilayer perceptrons (MLPs). It includes self-adapting features that make it suited to a variety of problems without the need for parameter readjustments. The validity of the approach is benchmarked on two types of problems. The first benchmark is the topologically complex parity problem, with the number of binary inputs ranging from 2 (the simplest case, the exclusive-OR problem) to 7 (a much harder problem). The statistically averaged learning times are compared with the best results obtainable with conventional error back-propagation (EBP) and are found to be reduced by two to three orders of magnitude. The second problem type occurs when high accuracy is needed in separating example classes, i.e. when different output sign patterns of the MLP are requested for slightly different input variables. As the minimum Euclidean distance epsilon between the classes to be separated decreases, the best learning times obtained with conventional EBP grow roughly as 1/epsilon^2, whereas the present algorithm yields substantially shorter learning times that behave like log(1/epsilon).
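The abstract does not spell out the self-adapting rule itself, so the sketch below only illustrates the general idea of adaptive-rate EBP on the 2-input parity (XOR) benchmark mentioned above. It uses the well-known "bold driver" heuristic (accept a step and grow the learning rate when the error drops, otherwise reject it and shrink the rate) as a stand-in for the paper's mechanism; the network size, constants, and variable names are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# Hedged sketch: adaptive-learning-rate EBP on XOR (2-input parity).
# The adaptation rule here is the generic "bold driver" heuristic,
# NOT the specific self-adapting scheme proposed in the paper.

rng = np.random.default_rng(0)

# XOR training set; targets in {-1, +1} to match tanh output units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[-1], [1], [1], [-1]], dtype=float)

# Small 2-4-1 MLP with tanh activations (sizes are illustrative).
W1 = rng.normal(0.0, 0.5, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1))
b2 = np.zeros(1)

def forward(W1, b1, W2, b2):
    h = np.tanh(X @ W1 + b1)      # hidden layer activations
    y = np.tanh(h @ W2 + b2)      # network outputs
    return h, y

def loss(y):
    return float(np.mean((y - T) ** 2))

eta = 0.1                          # current learning rate
grow, shrink = 1.1, 0.5            # bold-driver adaptation factors
params = [W1, b1, W2, b2]
prev = loss(forward(*params)[1])

for epoch in range(2000):
    W1, b1, W2, b2 = params
    h, y = forward(W1, b1, W2, b2)
    # Backpropagate the squared-error gradient through both tanh layers.
    d2 = (y - T) * (1 - y ** 2)
    d1 = (d2 @ W2.T) * (1 - h ** 2)
    grads = [X.T @ d1, d1.sum(0), h.T @ d2, d2.sum(0)]
    trial = [p - eta * g for p, g in zip(params, grads)]
    cur = loss(forward(*trial)[1])
    if cur <= prev:
        # Step helped: accept it and grow the learning rate.
        params, prev, eta = trial, cur, eta * grow
    else:
        # Step hurt: reject it and shrink the learning rate.
        eta *= shrink

final = loss(forward(*params)[1])
print(f"final MSE on XOR: {final:.4f}")
```

Because failed steps are rejected outright, the training error is non-increasing by construction; the rate adaptation is what removes the need to hand-tune eta per problem, which is the property the abstract emphasizes.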