On the Convergence of the LMS Algorithm with Adaptive Learning Rate for Linear Feedforward Networks
- 1 June 1991
- journal article
- Published by MIT Press in Neural Computation
- Vol. 3 (2), 226-245
- https://doi.org/10.1162/neco.1991.3.2.226
Abstract
We consider the problem of training a linear feedforward neural network with a gradient-descent-like LMS learning algorithm. The objective is to find a weight matrix for the network, by repeatedly presenting a finite set of examples to it, such that the sum of the squared errors is minimized. Kohonen showed that with a small but fixed learning rate (or step size), some subsequences of the weight matrices generated by the algorithm converge to certain matrices close to the optimal weight matrix. In this paper, we show that, by dynamically decreasing the learning rate during each training cycle, the sequence of matrices generated by the algorithm converges to the optimal weight matrix. We also show that for any given ε > 0, the LMS algorithm with decreasing learning rates generates an ε-optimal weight matrix (i.e., a matrix within distance ε of the optimal matrix) after O(1/ε) training cycles. This is in contrast to the Ω((1/ε) log(1/ε)) training cycles needed to generate an ε-optimal weight matrix when the learning rate is kept fixed. We also give a general condition on the learning rates under which the LMS learning algorithm is guaranteed to converge to the optimal weight matrix.
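As a concrete illustration of the scheme analyzed in the paper, the following is a minimal NumPy sketch of LMS training for a linear network with a learning rate that decreases across example presentations. The 1/t schedule, the zero initialization, and all names are illustrative assumptions; the paper's general condition on the learning rates is not reproduced here.

```python
import numpy as np

def lms_train(X, Y, cycles=2000, c=0.5):
    """Minimal LMS sketch for a linear network Y ≈ W X.

    X: (n_inputs, n_examples), Y: (n_outputs, n_examples).
    The learning rate decreases across presentations (eta = c / t),
    a common Robbins-Monro-style schedule chosen here for
    illustration; the paper states a more general condition.
    """
    n_out, n_in = Y.shape[0], X.shape[0]
    W = np.zeros((n_out, n_in))          # weight matrix to be learned
    t = 0
    for _ in range(cycles):              # one cycle = one pass over the examples
        for i in range(X.shape[1]):
            t += 1
            eta = c / t                  # dynamically decreasing learning rate
            x, y = X[:, i], Y[:, i]
            err = y - W @ x              # per-example error
            W += eta * np.outer(err, x)  # gradient-descent-like LMS update
    return W

# Usage: recover a planted weight matrix from noiseless examples.
rng = np.random.default_rng(0)
W_true = rng.standard_normal((2, 3))
X = rng.standard_normal((3, 50))
Y = W_true @ X
W_hat = lms_train(X, Y)
print(np.linalg.norm(W_hat - W_true))   # distance to the optimal weight matrix
```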
References
- Some Asymptotic Results for Learning in Single Hidden-Layer Feedforward Network Models. Journal of the American Statistical Association, 1989
- Asymptotic Convergence of Backpropagation. Neural Computation, 1989
- Adaptive filters with constraints and correlated non-stationary signals. Systems & Control Letters, 1988
- Increased rates of convergence through learning rate adaptation. Neural Networks, 1988
- Analysis of hidden units in a layered network trained to classify sonar targets. Neural Networks, 1988
- An Adaptive Associative Memory Principle. IEEE Transactions on Computers, 1974
- On Asymptotic Normality in Stochastic Approximation. The Annals of Mathematical Statistics, 1968
- Asymptotic Distribution of Stochastic Approximation Procedures. The Annals of Mathematical Statistics, 1958