On the Convergence of Random Search Algorithms in Continuous Time with Applications to Adaptive Control
- 1 January 1973
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Systems, Man, and Cybernetics
- Vol. SMC-3 (1), 62-66
- https://doi.org/10.1109/tsmc.1973.5408578
Abstract
The use of a gradient search algorithm for computing the minimum of a function is well understood. In particular, in continuous time, the algorithm can be formulated as a differential equation whose equilibrium solution is a local minimum of the function. Khas'minskii has proposed using the continuous gradient algorithm with a white-noise driving term. He argues, via the convergence of the probability density function, that the equilibrium solution of this differential equation is the global minimum of the function. This paper reviews that result from the point of view of the theory of diffusion processes. It is shown that Khas'minskii's conclusion is not correct, and that the stochastic gradient search in fact behaves like the noise-free gradient search: the search with noise converges with probability one to any of the local minima, in the same way that the noise-free search converges to a local minimum. Several stochastic adaptive control systems use random search algorithms in their operation. One of these, due to Barron, is analyzed and shown to satisfy the convergence conditions derived here.
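The contrast drawn in the abstract can be illustrated numerically. The sketch below, not from the paper, uses an Euler(-Maruyama) discretization of the continuous-time gradient flow dx = -f'(x) dt, with and without a white-noise term sigma dW, on a hypothetical double-well function with local minima at x = -1 and x = +1. Consistent with the paper's conclusion, the small-noise search settles into the same nearby local minimum as the noise-free search rather than locating a global minimum.

```python
import numpy as np

# Double-well potential with minima at x = -1 and x = +1 (illustrative choice).
def f(x):
    return (x**2 - 1.0)**2

def grad_f(x):
    return 4.0 * x * (x**2 - 1.0)

def gradient_search(x0, steps=20000, dt=1e-3, sigma=0.0, seed=0):
    """Euler(-Maruyama) discretization of dx = -f'(x) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(steps):
        x += -grad_f(x) * dt
        if sigma > 0.0:
            x += sigma * np.sqrt(dt) * rng.standard_normal()
    return x

# Noise-free search from x0 = 0.5 settles at the nearby local minimum x = +1.
x_det = gradient_search(0.5, sigma=0.0)

# With a small white-noise term, the search still settles near the same
# local minimum; the noise does not single out a global minimum.
x_sto = gradient_search(0.5, sigma=0.05)
```

The noise level here is small relative to the barrier between the wells, so well-to-well transitions are effectively never observed on this time horizon, matching the local-convergence behavior the paper establishes.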