Abstract
Sufficient conditions are presented for a Markov decision process to have a myopic optimum and for a stochastic game to possess a myopic equilibrium point. An optimum (or an equilibrium point) is said to be “myopic” if it can be deduced from an optimum (or an equilibrium point) of a static optimization problem (or a static [Nash] game). The principal conditions are (a) each single-period reward is the sum of terms due to the current state and action, (b) each transition probability depends on the action taken but not on the state from which the transition occurs, and (c) an appropriate static optimum (or equilibrium point) is repeatable ad infinitum. These conditions are satisfied by several dynamic oligopoly models and numerous Markov decision processes.
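As a brief formal sketch (the notation below is assumed for exposition and is not taken from the abstract), conditions (a) and (b) can be written, for a single-period reward $r(s,a)$ with state component $K$, action component $L$, transition probability $q(\cdot \mid s,a)$, and discount factor $\beta \in [0,1)$, as

\[
r(s,a) = K(s) + L(a), \qquad q(j \mid s, a) = q(j \mid a).
\]

Under (a) and (b) the state-dependent part of future rewards is determined entirely by the current action, so an action solving the static problem

\[
a^{*} \in \arg\max_{a \in A} \; \Bigl[\, L(a) + \beta \sum_{j} q(j \mid a)\, K(j) \,\Bigr]
\]

in every period yields a myopic optimal policy, provided (c) that such a maximizer remains feasible in every state that is reached.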