Iterative aggregation for solving undiscounted semi-Markovian reward processes

Abstract
The value-determination equations for single-chain undiscounted Markov renewal reward processes, v = q - gT + Pv, are solved here by an iterative algorithm that alternates between computing aggregate values defined on blocks of states and computing disaggregated values within each block in terms of the aggregate values of the other blocks. For large problems with exploitable structure, the method appears promising as an alternative to successive approximations, with potential advantages of reduced computer time, reduced main-memory requirements, reduced sensitivity to the starting point, and a superior asymptotic rate of convergence.
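To make the value-determination equations concrete, the following sketch solves v = q - gT + Pv directly as a linear system on a small hypothetical example (the matrices P, q, T below are invented for illustration; this is a plain direct solve, not the paper's iterative aggregation algorithm). In the single-chain case v is determined only up to an additive constant, so one component is pinned to zero and the gain g becomes an unknown:

```python
import numpy as np

# Hypothetical 3-state single-chain semi-Markov reward process:
# P = transition probabilities, q = expected one-transition rewards,
# T = expected holding times in each state.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
q = np.array([5.0, -1.0, 2.0])
T = np.array([1.0, 2.0, 1.5])

n = len(q)
# Rearranged equations: (I - P) v + g T = q.
# Pin v[n-1] = 0; unknowns are v[0..n-2] and the gain g.
A = np.column_stack([(np.eye(n) - P)[:, :n - 1], T])
x = np.linalg.solve(A, q)
v = np.append(x[:n - 1], 0.0)   # relative values
g = x[-1]                       # gain (reward rate)

# Verify the original fixed-point form v = q - gT + Pv.
assert np.allclose(v, q - g * T + P @ v)
```

The aggregation method of the abstract targets the same system but avoids a full direct solve, working instead with block-level aggregates and within-block disaggregations, which is where the memory and time savings for large structured problems would come from.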