Abstract
This paper systematically surveys the development of stochastic quasigradient (SQG) methods. These methods make it possible to solve optimization problems without calculating the precise values of objectives and constraints (let alone their derivatives). For deterministic nonlinear optimization problems, these methods can be regarded as methods of random search. For stochastic programming problems, SQG methods generalize the well-known stochastic approximation methods for unconstrained optimization of the expectation of a random function to problems involving general constraints.
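For orientation only, the following is a minimal sketch (not taken from the paper) of a projected stochastic quasigradient iteration for minimizing the expectation of a random function over a convex feasible set, assuming the set is handled by a projection operator and a diminishing step size is used; the names `sqg_minimize`, `sample_quasigradient`, and `project` are hypothetical illustrations.

```python
import numpy as np

def sqg_minimize(sample_quasigradient, project, x0, steps=2000):
    """Sketch of a projected SQG iteration for min E[f(x, omega)] over a convex set.

    sample_quasigradient(x): noisy vector whose conditional expectation
                             approximates a (sub)gradient of the objective at x.
    project(x):              projection of x back onto the feasible set.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        xi = sample_quasigradient(x)   # stochastic quasigradient estimate at x^k
        rho = 1.0 / k                  # diminishing steps: sum rho_k = inf, sum rho_k^2 < inf
        x = project(x - rho * xi)      # projected step onto the constraint set
    return x

# Illustrative use: minimize E[(x - omega)^2], omega ~ N(1, 1), subject to x in [0, 0.5];
# the constrained minimizer is the boundary point x = 0.5.
rng = np.random.default_rng(0)
sample_qg = lambda x: 2.0 * (x - rng.normal(1.0, 1.0))
proj = lambda x: np.clip(x, 0.0, 0.5)
print(sqg_minimize(sample_qg, proj, x0=np.array([0.0])))
```

The sketch illustrates only the general pattern referred to in the abstract: noisy directional information replaces exact gradients, and constraints are handled without evaluating them exactly at every step.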