Abstract
Stochastic approximation (SA) has long been applied to problems of minimizing loss functions or finding roots when only noisy input information is available. As with all stochastic search algorithms, there are adjustable algorithm coefficients that must be specified and that can have a profound effect on algorithm performance. It is known that choosing these coefficients according to an SA analogue of the deterministic Newton-Raphson algorithm yields an optimal or near-optimal form of the algorithm. This paper presents a general adaptive SA algorithm based on a simple method for estimating the Hessian matrix at each iteration while concurrently estimating the primary parameters of interest. The approach applies in both the gradient-free optimization (Kiefer-Wolfowitz) and root-finding/stochastic gradient-based (Robbins-Monro) settings and rests on the "simultaneous perturbation" idea introduced previously.
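As context for the simultaneous-perturbation idea the abstract refers to, below is a minimal sketch of basic (non-adaptive) SPSA, not the paper's adaptive Hessian-estimating algorithm. The function names, gain values, and the quadratic test problem are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def spsa_gradient(loss, theta, c, rng):
    """One simultaneous-perturbation gradient estimate.

    All components of the gradient are approximated from only two
    loss evaluations, using a random +/-1 (Bernoulli) perturbation.
    """
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # simultaneous perturbation
    y_plus = loss(theta + c * delta)
    y_minus = loss(theta - c * delta)
    # Elementwise division by delta gives each gradient component.
    return (y_plus - y_minus) / (2.0 * c * delta)

def spsa_minimize(loss, theta0, n_iter=2000, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """Basic SPSA with standard decaying gain sequences a_k and c_k
    (the adjustable coefficients the abstract discusses)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(n_iter):
        a_k = a / (k + 1) ** alpha   # step-size gain
        c_k = c / (k + 1) ** gamma   # perturbation size
        theta = theta - a_k * spsa_gradient(loss, theta, c_k, rng)
    return theta
```

The paper's contribution is choosing these gains adaptively via a concurrent simultaneous-perturbation estimate of the Hessian, in the spirit of a stochastic Newton-Raphson step; the fixed `a` and `alpha` above are what that adaptivity replaces.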
