Abstract
A disturbance attenuation problem over a finite-time interval is considered by a game-theoretic approach in which the control, restricted to a function of the measurement history, plays against adversaries composed of the process disturbance, the measurement disturbance, and the initial state. A zero-sum game, formulated as a quadratic cost criterion subject to linear time-varying dynamics and measurements, is solved by a calculus-of-variations technique. Maximizing the quadratic cost criterion first with respect to the process disturbance and the initial state reduces the problem to a full-information game between the control and the measurement residual, subject to the estimator dynamics. The resulting solution produces an n-dimensional compensator that compactly expresses the controller as a linear combination of the measurement history. Furthermore, the controller requires the solution of two Riccati differential equations (RDEs). For the linear saddle strategy of the controller, necessary and sufficient conditions are given for the saddle point to be strictly concave with respect to all disturbances and initial conditions, together with sufficient conditions for various process disturbance strategies to satisfy the saddle-point condition. The disturbance attenuation problem is then solved using the results of the game problem. For time-invariant systems it is shown that, under certain conditions, the time-varying controller becomes time-invariant on the infinite-time interval. The resulting controller satisfies an H∞ norm bound.
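The formulation summarized above follows the standard finite-horizon disturbance attenuation setup; a minimal sketch is given below, assuming generic weighting matrices Q, R, W, V, Q_f, P_0 and an attenuation level γ, none of which are specified in this abstract.

% Illustrative zero-sum game formulation (symbols assumed for illustration, not taken from the paper)
\[
\dot{x} = A(t)\,x + B(t)\,u + \Gamma(t)\,w, \qquad z = C(t)\,x + v,
\]
\[
J = \tfrac{1}{2}\,x(t_f)^{\mathsf T} Q_f\, x(t_f)
  + \tfrac{1}{2}\int_{t_0}^{t_f}\!\left( x^{\mathsf T} Q x + u^{\mathsf T} R u \right) dt
  - \tfrac{\gamma^2}{2}\int_{t_0}^{t_f}\!\left( w^{\mathsf T} W^{-1} w + v^{\mathsf T} V^{-1} v \right) dt
  - \tfrac{\gamma^2}{2}\, x(t_0)^{\mathsf T} P_0^{-1} x(t_0),
\]
where the control u minimizes J against the maximizing process disturbance w, measurement disturbance v, and initial state x(t_0). Under this kind of cost, existence of a saddle point with bounded value corresponds to the closed-loop H∞ norm bound γ referred to in the abstract.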