The Interchangeability of Learning Rate and Gain in Backpropagation Neural Networks
- 15 February 1996
- journal article
- Published by MIT Press in Neural Computation
- Vol. 8 (2), 451-460
- https://doi.org/10.1162/neco.1996.8.2.451
Abstract
The backpropagation algorithm is widely used for training multilayer neural networks. In this publication, the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem can be extended to hold for some well-known variations on the backpropagation algorithm, such as using a momentum term, flat spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the nonstandard gain of optical sigmoids for optical neural networks.
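The equivalence stated in the abstract can be checked numerically. The sketch below is not from the paper; the network size, data, and variable names are illustrative, and it assumes the standard form of the result: a network whose sigmoids have gain beta trains identically to a gain-1 network whose weights are multiplied by beta and whose learning rate is multiplied by beta squared.

```python
# Minimal numerical sketch of the gain/learning-rate equivalence (assumed scaling:
# weights * beta, learning rate * beta**2). Illustrative only; not the paper's code.
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(x, gain=1.0):
    """Logistic activation with an explicit gain (steepness) parameter."""
    return 1.0 / (1.0 + np.exp(-gain * x))


def train_step(W1, W2, x, t, lr, gain):
    """One plain backpropagation step for a one-hidden-layer network (no biases)."""
    # Forward pass
    h = sigmoid(W1 @ x, gain)                            # hidden activations
    y = sigmoid(W2 @ h, gain)                            # output activations
    # Backward pass for squared error 0.5 * (y - t)^2
    delta_out = (y - t) * gain * y * (1 - y)             # dE / d(W2 @ h)
    delta_hid = (W2.T @ delta_out) * gain * h * (1 - h)  # dE / d(W1 @ x)
    W2 = W2 - lr * np.outer(delta_out, h)
    W1 = W1 - lr * np.outer(delta_hid, x)
    return W1, W2


# Toy problem and initial weights
x = rng.normal(size=3)
t = np.array([0.2, 0.9])
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

beta, lr = 2.5, 0.1

# Network A: gain beta, original weights, learning rate lr
A1, A2 = W1.copy(), W2.copy()
# Network B: gain 1, weights scaled by beta, learning rate scaled by beta**2
B1, B2 = beta * W1, beta * W2

for _ in range(50):
    A1, A2 = train_step(A1, A2, x, t, lr, gain=beta)
    B1, B2 = train_step(B1, B2, x, t, lr * beta**2, gain=1.0)

# If the equivalence holds, the gain-1 weights track beta times the gain-beta weights
# at every step, so both networks compute the same input-output mapping throughout.
print(np.allclose(B1, beta * A1), np.allclose(B2, beta * A2))  # expected: True True
```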