Discretization procedures for adaptive Markov control processes
- 1 February 1989
- journal article
- research article
- Published by Elsevier in Journal of Mathematical Analysis and Applications
- Vol. 137 (2), pp. 485-514
- https://doi.org/10.1016/0022-247x(89)90259-x
Abstract
No abstract available.

References
This publication has 15 references indexed in Scilit:
- Continuous dependence of stochastic control models on the noise distribution. Applied Mathematics & Optimization, 1988
- Adaptive policies for discrete-time stochastic control systems with unknown disturbance distribution. Systems & Control Letters, 1987
- An Approach to Discrete-Time Stochastic Control Problems under Partial Observation. SIAM Journal on Control and Optimization, 1987
- Approximation and bounds in discrete event dynamic programming. IEEE Transactions on Automatic Control, 1986
- Adaptive control of discounted Markov decision chains. Journal of Optimization Theory and Applications, 1985
- Strongly consistent estimation in a controlled Markov renewal model. Journal of Applied Probability, 1982
- Nonstationary Markov decision problems with converging parameters. Journal of Optimization Theory and Applications, 1981
- Empirical Processes: A Survey of Results for Independent and Identically Distributed Random Variables. The Annals of Probability, 1979
- Optimal Plans for Dynamic Programming Problems. Mathematics of Operations Research, 1976
- Convergence of discretization procedures in dynamic programming. IEEE Transactions on Automatic Control, 1975