Abstract
This paper discusses research issues related to the general topic of optimizing a stochastic system via simulation. In particular, we devote extensive attention to finite-difference estimators of objective-function gradients and present a number of new limit theorems. We also discuss a new family of orthogonal-function approximations to the global behavior of the objective function, and show that if the objective function is sufficiently smooth, the convergence rate can be made arbitrarily close to n^(-1/2) in the number n of observations required. The paper concludes with a brief discussion of how these ideas can be integrated into an optimization algorithm.
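The finite-difference estimators discussed above can be illustrated with a minimal sketch. The simulation model below (a noisy quadratic) and all function names are hypothetical stand-ins, not the paper's formulation; the sketch shows a central finite-difference gradient estimate averaged over replications, using common random numbers across the two perturbed runs to reduce the variance of the difference.

```python
import random

def simulate(theta, seed):
    """Hypothetical stochastic simulation: a noisy observation of the
    objective f(theta) = (theta - 2)^2.  Stands in for any simulation
    whose expected output we wish to optimize over theta."""
    rng = random.Random(seed)
    return (theta - 2.0) ** 2 + rng.gauss(0.0, 0.1)

def central_fd_gradient(theta, h, n):
    """Central finite-difference estimator of f'(theta), averaged over
    n replications.  Reusing the same seed at theta+h and theta-h
    (common random numbers) makes the additive noise cancel in the
    difference, sharply reducing the estimator's variance."""
    total = 0.0
    for i in range(n):
        up = simulate(theta + h, seed=i)
        down = simulate(theta - h, seed=i)
        total += (up - down) / (2.0 * h)
    return total / n

g = central_fd_gradient(theta=1.0, h=0.01, n=200)
# True derivative at theta = 1 is 2 * (1 - 2) = -2
```

For this additive-noise quadratic, common random numbers cancel the noise exactly and the central difference is exact, so the estimate equals -2; in general the estimator trades bias (growing with h) against variance (growing as h shrinks), which is the tension the paper's limit theorems address.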
