Near-Optimal Feedback Stabilization of a Class of Nonlinear Singularly Perturbed Systems

Abstract
A new series expansion method is developed for a class of nonlinear singularly perturbed optimal regulator problems. The resulting feedback control is near-optimal and can stabilize essentially nonlinear systems for which linearized models provide no stability information. The stability domain is shown to include large initial conditions of the fast variables. The control law is implemented in two time scales, with the feedback from the fast state variables depending on the slow state variables as parameters. The coefficients of the formal expansions of the optimal value function are obtained from equations involving only the slow variables.
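To fix notation, a representative problem in this class may be written as follows; the symbols $x$, $z$, $u$, $f$, $g$, $L$, $\varepsilon$, and $V$ used here are illustrative and not necessarily the paper's own.

% Sketch of a standard singularly perturbed regulator setup (assumed notation).
\begin{align*}
  \dot{x} &= f(x, z, u), & x(0) &= x_0 \quad \text{(slow states)}, \\
  \varepsilon \dot{z} &= g(x, z, u), & z(0) &= z_0 \quad \text{(fast states)}, \\
  J &= \int_0^{\infty} L\bigl(x(t), z(t), u(t)\bigr)\, dt, & 0 < {}&\varepsilon \ll 1 .
\end{align*}
% The optimal value function is expanded formally in powers of $\varepsilon$,
\[
  V(x, z; \varepsilon) = V_0(x, z) + \varepsilon\, V_1(x, z) + \varepsilon^2 V_2(x, z) + \cdots,
\]
% and the near-optimal feedback is taken in the composite two-time-scale form
\[
  u(x, z) = u_s(x) + u_f(x, z),
\]
% where the fast component $u_f$ treats the slow state $x$ as a frozen parameter.

In this sketch, the coefficients $V_k$ would be determined from equations posed in the slow variables alone, consistent with the construction summarized in the abstract.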