Stabilization of jump linear Gaussian systems without mode observations
- 1 July 1996
- journal article
- research article
- Published by Taylor & Francis in International Journal of Control
- Vol. 64 (4), 631-661
- https://doi.org/10.1080/00207179608921647
Abstract
Systems subject to abrupt changes (including failures), or systems with an uncertain dynamic model (or more than one possible model), can be naturally modelled as jump linear (JL) systems. Because of their applications in fields such as tracking, fault-tolerant control, manufacturing processes and robotics, JL systems have drawn extensive attention. When the mode (system model) is not assumed to be directly and perfectly observed, which is a realistic assumption in many applications, the optimal control and stabilization problem for JL systems is nonlinear and prohibitive both analytically and computationally because of the dual effect. The main contribution of this work is a sufficient condition for stabilization by a class of adaptive controllers when the mode is not directly observed. We first present the optimal controller under an assumption of a certain type of mode availability. Using this optimal feedback gain, we derive a condition that ensures the stabilizing property of a class of adaptive controllers without direct knowledge of the mode. Two specific adaptive controllers (maximum a posteriori and averaging) are examined in detail and their stabilizing property is proved. An algorithm to compute the optimal feedback gain, together with its convergence, is presented. Examples show that the performance of the adaptive controllers without mode observations derived here is very close to that of the optimal controller with known modes.
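To make the abstract's ingredients concrete, the sketch below assumes the standard discrete-time jump-linear-quadratic setting: x_{k+1} = A_i x_k + B_i u_k + w_k, where the mode i is governed by a finite-state Markov chain with transition matrix P. It iterates the well-known coupled Riccati recursion to obtain per-mode feedback gains L_i (the feedback gains one would use if the mode were available), and then forms the two adaptive laws named in the abstract, maximum a posteriori (MAP) and averaging, from a vector of mode probabilities. This is an illustrative sketch of the standard formulation, not the paper's own algorithm or its convergence analysis, which may differ in detail.

```python
import numpy as np


def coupled_riccati_gains(A, B, Q, R, P, iters=1000, tol=1e-10):
    """Iterate the coupled Riccati recursion of the jump-linear-quadratic
    (JLQ) problem with N modes.

    A, B, Q, R: lists of per-mode matrices; P: N x N Markov transition matrix.
    Returns per-mode cost matrices K_i and feedback gains L_i so that
    u_k = L_i x_k when the current mode is i.
    """
    N, n = len(A), A[0].shape[0]
    K = [np.zeros((n, n)) for _ in range(N)]
    for _ in range(iters):
        # Mode-coupled expectation E_i(K) = sum_j p_ij K_j.
        E = [sum(P[i, j] * K[j] for j in range(N)) for i in range(N)]
        K_new = []
        for i in range(N):
            BtE = B[i].T @ E[i]
            G = np.linalg.solve(R[i] + BtE @ B[i], BtE @ A[i])
            K_new.append(Q[i] + A[i].T @ E[i] @ A[i] - (BtE @ A[i]).T @ G)
        if max(np.max(np.abs(Kn - Ko)) for Kn, Ko in zip(K_new, K)) < tol:
            K = K_new
            break
        K = K_new
    E = [sum(P[i, j] * K[j] for j in range(N)) for i in range(N)]
    L = [-np.linalg.solve(R[i] + B[i].T @ E[i] @ B[i], B[i].T @ E[i] @ A[i])
         for i in range(N)]
    return K, L


def adaptive_input(x, pi, L):
    """Two adaptive control laws built from mode probabilities pi.

    MAP: apply the gain of the most probable mode.
    Averaging: apply the probability-weighted combination of the gains.
    """
    u_map = L[int(np.argmax(pi))] @ x
    u_avg = sum(p * Li for p, Li in zip(pi, L)) @ x
    return u_map, u_avg


if __name__ == "__main__":
    # Two scalar modes (one stable, one unstable) with a symmetric Markov chain.
    A = [np.array([[0.8]]), np.array([[1.2]])]
    B = [np.array([[1.0]]), np.array([[1.0]])]
    Q = [np.eye(1), np.eye(1)]
    R = [np.eye(1), np.eye(1)]
    P = np.array([[0.9, 0.1], [0.1, 0.9]])
    K, L = coupled_riccati_gains(A, B, Q, R, P)
    u_map, u_avg = adaptive_input(np.array([1.0]), np.array([0.3, 0.7]), L)
```

The averaging law blends the per-mode gains by their posterior probabilities, while the MAP law commits to the single most probable mode; according to the abstract, both of these adaptive controllers are examined in the paper and shown to be stabilizing under its sufficient condition.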