Convergence analysis of smoothed stochastic gradient-type algorithm
- 1 January 1987
- journal article
- research article
- Published by Taylor & Francis in International Journal of Systems Science
- Vol. 18 (6), 1061-1078
- https://doi.org/10.1080/00207728708964032
Abstract
Stochastic gradient (SG) algorithms are widely used, mainly because of their simplicity and ease of implementation. However, their performance, both in terms of convergence rate and steady-state behaviour, is often unsatisfactory. While retaining the basic simplicity of gradient methods, the smoothed stochastic gradient (SSG) algorithm includes some additional processing of the data. There are strong indications that this additional processing improves performance in many cases. However, the convergence of the algorithm has remained an open problem. In this paper we present a rigorous analysis showing, under very mild assumptions on the data, that the algorithm converges almost everywhere. The main tool of our analysis is the so-called ‘associated differential equation’, and we make use of a related theorem introduced by Kushner and Clark.
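As an illustration only, and not taken from the paper, the sketch below shows one common way a "smoothed" stochastic gradient recursion can be formed: the noisy instantaneous gradient estimate is passed through an exponential averaging filter before the parameter update is applied. The objective, the smoothing factor `beta`, the step size `mu`, and the function names are hypothetical; the paper's exact SSG recursion and step-size conditions may differ.

```python
import numpy as np

def smoothed_sg(grad_fn, theta0, n_steps, beta=0.9, mu=0.01, rng=None):
    """Illustrative smoothed stochastic gradient recursion (not the paper's exact SSG).

    grad_fn(theta, rng) returns a noisy gradient estimate at theta.
    The estimate is low-pass filtered (exponential averaging) before it is
    applied, which is the extra 'processing of the data' that distinguishes
    SSG-type schemes from plain SG.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    g_smooth = np.zeros_like(theta)
    for _ in range(n_steps):
        g = grad_fn(theta, rng)                        # noisy instantaneous gradient
        g_smooth = beta * g_smooth + (1 - beta) * g    # smoothing (extra processing)
        theta = theta - mu * g_smooth                  # gradient-type update
    return theta


# Usage: noisy quadratic objective with minimum at theta = [1, -2]
if __name__ == "__main__":
    target = np.array([1.0, -2.0])
    noisy_grad = lambda th, rng: (th - target) + 0.5 * rng.standard_normal(th.shape)
    print(smoothed_sg(noisy_grad, theta0=np.zeros(2), n_steps=5000))
```

Roughly speaking, the Kushner and Clark style of argument referred to in the abstract associates with such a recursion the ordinary differential equation whose right-hand side is the mean update direction; under suitable conditions the iterates asymptotically track the trajectories of that ODE, and convergence of the algorithm is deduced from the stability of the ODE.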
References
- Applications of a Kushner and Clark lemma to general classes of stochastic algorithms. IEEE Transactions on Information Theory, 1984.
- Convergence of an adaptive linear estimation algorithm. IEEE Transactions on Automatic Control, 1984.
- Stochastic Approximation Methods for Constrained and Unconstrained Systems. Springer, 1978.
- Analysis of recursive stochastic algorithms. IEEE Transactions on Automatic Control, 1977.