Abstract
We show that the discrete-time disturbance rejection problem, formulated over finite and infinite horizons and under perfect state measurements, can be solved by making direct use of results on linear-quadratic zero-sum dynamic games. For the finite-horizon problem an optimal (minimax) controller exists (in contrast with the continuous-time H-infinity control problem), and can be expressed in terms of a generalized (time-varying) discrete-time Riccati equation. An optimum also exists in the infinite-horizon case, under an appropriate observability condition, with the optimal control, given in terms of a generalized algebraic Riccati equation, also being stabilizing. In both cases, the corresponding worst-case disturbances turn out to be correlated random sequences with discrete distributions, which means that the problem (viewed as a dynamic game between the controller and the disturbance) does not admit a pure-strategy saddle point. The paper also presents results for the delayed state measurement and the nonzero initial state cases. Furthermore, it formulates a stochastic version of the problem, where the disturbance is a partially stochastic process with fixed higher-order moments (other than the mean). In this case, the minimax controller depends on the energy bound of the disturbance, provided that it is below a certain threshold. Several numerical studies included in the paper illustrate the main results.
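To make the role of the generalized (time-varying) discrete-time Riccati equation concrete, the following is a minimal sketch of the backward recursion for a soft-constrained LQ zero-sum game of the kind the abstract describes. The dynamics `x_{k+1} = A x_k + B u_k + D w_k`, the cost weighting, the terminal condition, and all symbol names here are illustrative assumptions, not the paper's exact notation; the positive-definiteness check plays the role of the existence (attenuation-threshold) condition mentioned in the abstract.

```python
import numpy as np

def game_riccati_recursion(A, B, D, Q, gamma, horizon):
    """Backward recursion for a generalized (game-theoretic) Riccati
    equation of the soft-constrained LQ zero-sum game (illustrative form):
        x_{k+1} = A x_k + B u_k + D w_k,
        J = sum_k (x_k' Q x_k + u_k' u_k - gamma^2 w_k' w_k).
    Returns the value matrices M_0, ..., M_N (M_N = Q chosen as an
    illustrative terminal condition).  Raises ValueError when the
    stage-wise existence condition I - gamma^{-2} D' M D > 0 fails,
    i.e. when gamma is below the attenuation threshold."""
    n = A.shape[0]
    M = np.array(Q, dtype=float)   # terminal condition M_N = Q (assumption)
    Ms = [M]
    for _ in range(horizon):
        # existence condition for the minimax controller at this stage
        cond = np.eye(D.shape[1]) - (D.T @ M @ D) / gamma**2
        if np.any(np.linalg.eigvalsh(cond) <= 0):
            raise ValueError("gamma is below the attenuation threshold")
        # generalized Riccati update: M <- Q + A' M (I + (BB' - g^-2 DD') M)^{-1} A
        Lam = np.eye(n) + (B @ B.T - (D @ D.T) / gamma**2) @ M
        M = Q + A.T @ M @ np.linalg.solve(Lam, A)
        Ms.append(M)
    return Ms[::-1]
```

For a scalar stable system and a large enough attenuation level gamma, the recursion converges toward a fixed point, which corresponds to the generalized algebraic Riccati equation of the infinite-horizon case; for gamma below the threshold the existence check fails, mirroring the threshold behavior the abstract discusses.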
