Perturbation theory for unbounded Markov reward processes with applications to queueing
- 1 March 1988
- journal article
- Published by Cambridge University Press (CUP) in Advances in Applied Probability
- Vol. 20 (1), 99-111
- https://doi.org/10.2307/1427272
Abstract
Consider a perturbation in the one-step transition probabilities and rewards of a discrete-time Markov reward process with an unbounded one-step reward function. A perturbation estimate is derived for the finite-horizon and average reward functions. Results from [3] are hereby extended to the unbounded case. The analysis is illustrated for one- and two-dimensional queueing processes by an M/M/1 queue and an overflow queueing model with an error bound on the arrival rate.
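The perturbation setting described in the abstract can be illustrated numerically. The sketch below (Python, not taken from the paper) builds a truncated, uniformized M/M/1 queue, takes the queue length as the one-step reward, and compares the average reward before and after a small change in the arrival rate; the truncation level, rates, and reward choice are illustrative assumptions, and the paper's actual perturbation bound is not reproduced here.

```python
import numpy as np

def mm1_transition_matrix(lam, mu, N):
    """Uniformized transition matrix of an M/M/1 queue truncated at N customers."""
    P = np.zeros((N + 1, N + 1))
    c = lam + mu                       # uniformization constant
    for i in range(N + 1):
        if i < N:
            P[i, i + 1] = lam / c      # arrival
        if i > 0:
            P[i, i - 1] = mu / c       # departure
        P[i, i] = 1.0 - P[i].sum()     # self-loop keeps each row stochastic
    return P

def average_reward(P, r):
    """Average reward g = sum_i pi(i) r(i), with pi the stationary distribution."""
    n = P.shape[0]
    # Solve pi P = pi together with sum(pi) = 1 as one linear system.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(pi @ r)

N, mu = 50, 1.0                        # illustrative truncation level and service rate
r = np.arange(N + 1, dtype=float)      # reward = queue length (unbounded in the original model)
lam, eps = 0.5, 0.01                   # nominal arrival rate and a small perturbation

g_nominal   = average_reward(mm1_transition_matrix(lam,       mu, N), r)
g_perturbed = average_reward(mm1_transition_matrix(lam + eps, mu, N), r)
print(f"g = {g_nominal:.4f}, perturbed g = {g_perturbed:.4f}, "
      f"|difference| = {abs(g_perturbed - g_nominal):.4f}")
```

For small perturbations the observed difference in average reward is roughly proportional to the change in the arrival rate, which is the kind of sensitivity the paper's perturbation estimates bound; the truncation at N is a convenience for this sketch and sidesteps the unboundedness that the paper handles directly.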
References
- Perturbation theory for Markov reward processes with applications to queueing systems. Advances in Applied Probability, 1988
- On the Overflow Process from a Finite Markovian Queue. Performance Evaluation, 1984
- The Condition of a Finite Markov Chain and Perturbation Bounds for the Limiting Probabilities. SIAM Journal on Algebraic Discrete Methods, 1980
- Approximations of Dynamic Programs, I. Mathematics of Operations Research, 1978
- Perturbation theory and finite Markov chains. Journal of Applied Probability, 1968