Nonquadratic cost and nonlinear feedback control

Abstract
Nonlinear controllers offer significant advantages over linear controllers in a variety of circumstances. Hence there has been significant interest in extending linear-quadratic synthesis to nonlinear-nonquadratic problems. The purpose of this paper is to review the current status of such efforts and to present, in a simplified and tutorial manner, some of the basic ideas underlying these results. Our approach focuses on the role of the Lyapunov function in guaranteeing stability for autonomous systems on an infinite horizon. Sufficient conditions for optimality are given in a form that corresponds to a steady-state version of the Hamilton-Jacobi-Bellman equation. These results are used to provide a simplified derivation of the nonlinear feedback controller obtained by Bass and Webber (1966) [38] and to obtain a deterministic variation of the stochastic nonlinear feedback controller developed by Speyer (1976) [45].
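For context, a minimal sketch of the steady-state Hamilton-Jacobi-Bellman condition alluded to above, written in assumed notation (the system class, cost weights, and symbols here are illustrative and may differ from those used in the paper): for an autonomous control-affine system $\dot{x} = f(x) + G(x)u$ with infinite-horizon nonquadratic cost

$$
J(x_0, u(\cdot)) = \int_0^{\infty} \bigl[ L_1(x(t)) + u(t)^{\mathsf T} R(x(t))\, u(t) \bigr] \, dt,
$$

a sufficient condition for optimality of a stabilizing feedback $u = \phi(x)$ is the existence of a continuously differentiable, positive-definite function $V(x)$ satisfying the steady-state HJB equation

$$
0 = \min_{u} \Bigl[ L_1(x) + u^{\mathsf T} R(x) u + V'(x)\bigl( f(x) + G(x) u \bigr) \Bigr],
$$

with the minimizing feedback $\phi(x) = -\tfrac{1}{2} R(x)^{-1} G(x)^{\mathsf T} V'(x)^{\mathsf T}$. In this formulation $V$ doubles as a Lyapunov function for the closed-loop system, which is the connection between stability and optimality emphasized in the abstract.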
