Back propagation learning equations from the minimization of recursive error
- 1 January 1989
- Conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
A backpropagation learning technique is developed which ensures that the network is never far from a solution. It is shown how the concept of minimizing recursive mean-square error can be applied under the special restriction of the neural network: changes to the value of a connection require only information about the two nodes it connects. An order-of-magnitude argument is used to discard the off-diagonal elements of the second-derivative matrix, and a careful definition of generalized error makes it possible for each node to be independent, requiring only the states of the nodes connected to it and the error and derivative of the nodes to which it is connected. Results for a simple exclusive-OR problem exhibit robust learning at a rate roughly 100 times faster than conventional backpropagation. All problems tried converge asymptotically in fewer than 25 epochs.
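To make the idea concrete, below is a minimal sketch in the spirit of the abstract: plain backpropagation on mean-square error, but with each weight's step rescaled by a diagonal curvature estimate so that all off-diagonal second-derivative terms are ignored and every update stays local to the two nodes a weight connects. The network size, step size `eta`, damping `mu`, and the squared-gradient curvature estimate are illustrative assumptions, not the paper's exact learning equations.

```python
# A sketch of diagonal-curvature backpropagation on the XOR problem.
# Assumptions (not from the paper): a 2-4-1 sigmoid network, step size
# eta, damping mu, and per-example squared gradients as the diagonal
# second-derivative estimate.
import numpy as np

rng = np.random.default_rng(0)

# XOR training set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2-4-1 network; biases folded in as an extra always-on input
W1 = rng.normal(scale=1.0, size=(3, 4))   # input (+bias) -> hidden
W2 = rng.normal(scale=1.0, size=(5, 1))   # hidden (+bias) -> output

eta, mu = 0.5, 0.1                        # assumed step size and damping

ones = np.ones((X.shape[0], 1))
for epoch in range(2000):
    # forward pass
    X1 = np.hstack([X, ones])
    H = sigmoid(X1 @ W1)
    H1 = np.hstack([H, ones])
    Y = sigmoid(H1 @ W2)

    # backward pass: ordinary first-derivative backpropagation
    dY = (Y - T) * Y * (1 - Y)            # output-layer delta
    dH = (dY @ W2[:-1].T) * H * (1 - H)   # hidden-layer delta

    g2 = H1.T @ dY                        # gradient w.r.t. W2
    g1 = X1.T @ dH                        # gradient w.r.t. W1

    # diagonal curvature estimate: per-example squared gradients summed
    # over the batch; all cross terms between weights are discarded
    h2 = (H1 ** 2).T @ (dY ** 2)
    h1 = (X1 ** 2).T @ (dH ** 2)

    # each weight update uses only local quantities: the states of the
    # two nodes it connects and the backpropagated error/curvature
    W2 -= eta * g2 / (h2 + mu)
    W1 -= eta * g1 / (h1 + mu)

print(np.round(Y.ravel(), 3))  # typically approaches [0, 1, 1, 0]
```

Because the curvature estimate is purely diagonal, the effective per-weight step `eta / (h + mu)` can be computed from quantities already available at the two endpoints of each connection, which is the locality property the abstract emphasizes.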