A Generalized Discrete Dynamic Programming Model

Abstract
This paper considers a stationary discrete dynamic programming model that generalizes the finite-state, finite-action Markov programming problem. We specify conditions under which an optimal stationary linear decision rule exists and show how this optimal policy can be computed by linear programming, policy iteration, or value iteration. In addition, we allow the parameters of the problem to be random variables and indicate when the expected values of these random variables are certainty equivalents.
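To illustrate one of the solution methods named in the abstract, the following is a minimal sketch of value iteration for a finite-state, finite-action discounted Markov decision problem. The transition data, rewards, and discount factor below are hypothetical illustrations, not taken from the paper; the paper's generalized model and linearity conditions are not reproduced here.

```python
def value_iteration(P, r, beta, tol=1e-10):
    """Value iteration on a finite MDP (illustrative sketch).

    P[a][s][t] -- probability of moving from state s to state t under action a
    r[a][s]    -- immediate reward in state s under action a
    beta       -- discount factor in (0, 1)
    Returns the (approximate) optimal value function and a stationary policy.
    """
    S, A = len(r[0]), len(P)
    v = [0.0] * S
    while True:
        # Bellman update: q[a][s] = r[a][s] + beta * sum_t P[a][s][t] * v[t]
        q = [[r[a][s] + beta * sum(P[a][s][t] * v[t] for t in range(S))
              for s in range(S)] for a in range(A)]
        v_new = [max(q[a][s] for a in range(A)) for s in range(S)]
        if max(abs(v_new[s] - v[s]) for s in range(S)) < tol:
            policy = [max(range(A), key=lambda a: q[a][s]) for s in range(S)]
            return v_new, policy
        v = v_new


# Hypothetical two-state, two-action example.
P = [[[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
     [[0.5, 0.5], [0.6, 0.4]]]   # transitions under action 1
r = [[1.0, 0.0], [0.5, 2.0]]
v, policy = value_iteration(P, r, beta=0.9)
```

Because the Bellman operator is a contraction with modulus `beta`, the iteration converges to the unique fixed point, and the greedy policy at convergence is stationary and optimal for the finite problem.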
