Iterative aggregation for solving undiscounted semi-Markovian reward processes
- 1 January 1986
- Research article
- Published by Taylor & Francis in Communications in Statistics. Stochastic Models
- Vol. 2 (1), 1-41
- https://doi.org/10.1080/15326348608807023
Abstract
The value-determination equations for single-chain undiscounted Markov renewal reward processes, v = q - gT + Pv, are solved here by an iterative algorithm that alternates between computing aggregate values defined on blocks of states and computing disaggregated values within each block in terms of the aggregate values of the other blocks. For large problems with exploitable structure, the method appears promising as an alternative to successive approximations, with potential advantages of reduced computer time, reduced main-memory requirements, greater insensitivity to the starting point, and a superior asymptotic rate of convergence.
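To make the structure of the equations concrete, the following NumPy sketch sets up the value-determination equations v = q - gT + Pv (with v fixed to 0 at a reference state), solves them directly, and also runs a simple two-level aggregation/disaggregation iteration in the spirit of the abstract: fine-level sweeps over all states alternating with a small solve for one additive constant per block plus a gain update. The block partition, the uniform within-block weights, the Galerkin-style aggregate step, and all function names are illustrative assumptions; the paper's actual weighting and update rules may differ.

```python
# A minimal sketch, assuming a dense irreducible chain and an arbitrary
# two-block partition.  Not the paper's algorithm, only its general shape.
import numpy as np


def solve_vde_direct(P, q, T, ref=0):
    """Solve v = q - g*T + P v with v[ref] = 0 by one linear solve.

    Unknowns are v[i] for i != ref plus the gain g; the system
    (I - P) v + g*T = q is then nonsingular for an irreducible chain."""
    n = len(q)
    keep = [i for i in range(n) if i != ref]
    A = np.column_stack([(np.eye(n) - P)[:, keep], T])
    z = np.linalg.solve(A, q)
    v = np.zeros(n)
    v[keep] = z[:-1]
    return v, z[-1]


def solve_vde_aggregated(P, q, T, blocks, ref=0, sweeps=5,
                         max_cycles=100, tol=1e-10):
    """Alternate fine-level value sweeps with an aggregate correction that
    adjusts one additive constant per block plus the gain estimate."""
    n, K = len(q), max(blocks) + 1
    S = np.zeros((n, K))
    S[np.arange(n), blocks] = 1.0          # state -> block membership
    W = (S / S.sum(axis=0)).T              # uniform within-block weights (assumption)
    v, g = np.zeros(n), 0.0
    for _ in range(max_cycles):
        for _ in range(sweeps):            # disaggregated (fine-level) sweeps
            v = q - g * T + P @ v
            v -= v[ref]                    # keep the normalization v[ref] = 0
        r = q - g * T + P @ v - v          # residual of the equations
        if np.max(np.abs(r)) < tol:
            break
        # Aggregate step: choose block constants u (pinned to 0 in the
        # reference block) and a gain shift dg so the block-averaged
        # residual of v + S u vanishes:  W[(P - I) S u - dg*T + r] = 0.
        A = np.zeros((K + 1, K + 1))
        b = np.zeros(K + 1)
        A[:K, :K] = W @ (P - np.eye(n)) @ S
        A[:K, K] = -(W @ T)
        b[:K] = -(W @ r)
        A[K, blocks[ref]] = 1.0            # pin u in the reference block
        x = np.linalg.solve(A, b)
        v += S @ x[:K]                     # shift each block by its constant
        g += x[K]                          # update the gain estimate
    return v, g


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 6
    P = rng.random((n, n))
    P /= P.sum(axis=1, keepdims=True)      # dense irreducible transition matrix
    q = rng.random(n)                      # expected one-transition rewards
    T = 0.5 + rng.random(n)                # expected holding times
    blocks = np.array([0, 0, 0, 1, 1, 1])  # an arbitrary two-block partition

    v1, g1 = solve_vde_direct(P, q, T)
    v2, g2 = solve_vde_aggregated(P, q, T, blocks)
    print("gain (direct, aggregated):", g1, g2)
    print("max |v difference|:", np.max(np.abs(v1 - v2)))
```

Pinning v at the reference state (and the constant of its block in the aggregate step) removes the additive indeterminacy of the undiscounted equations, so both the full system and the small (K+1)-dimensional aggregate system are nonsingular for an irreducible chain.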