Existence and Control of Markov Chains in Systems of Deterministic Motion

Abstract
If the phase space $X$ of a motion $x_{n+1} = f(x_n)$ is discretized into a space of states $X_1, \cdots, X_N$, then probabilities can be assigned to sample paths in the state space so as to coincide with those assigned by a finite Markov chain. Theorems 1 and 2 show how the assignment of such probabilities rests on the properties of $f(\cdot)$ and on the construction of the states. Theorems 3 and 4 extend these results to the case in which $x_{n+1} = f(x_n, \omega)$, $\omega \in \Omega$ being a random event. Theorems 5 and 6 indicate certain applications relating to stochastic systems in which a decision-maker applies some control action that is fully or partially determined by the observed state of the system.
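As an illustration of the setting (not the paper's construction, whose conditions are given in Theorems 1 and 2), the following sketch discretizes the phase space of a deterministic map into $N$ cells and estimates the induced cell-to-cell transition matrix along an orbit. The choice of map (logistic), partition (equal cells of $[0,1]$), and estimation by counting are illustrative assumptions only.

```python
import numpy as np

# Minimal sketch (assumptions noted above): discretize the phase space
# [0, 1] of a deterministic motion x_{n+1} = f(x_n) into N equal cells
# X_1, ..., X_N and estimate the induced transition matrix by counting
# cell-to-cell transitions along a long orbit.

def f(x):
    # Example map (assumption): the logistic map on [0, 1].
    return 4.0 * x * (1.0 - x)

def cell_index(x, n_cells):
    # Assign a point to one of the cells X_1, ..., X_N (0-based index).
    return min(int(x * n_cells), n_cells - 1)

def empirical_transition_matrix(f, x0, n_cells=10, n_steps=100_000):
    counts = np.zeros((n_cells, n_cells))
    x = x0
    i = cell_index(x, n_cells)
    for _ in range(n_steps):
        x = f(x)
        j = cell_index(x, n_cells)
        counts[i, j] += 1
        i = j
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # leave unvisited cells as zero rows
    return counts / row_sums

P = empirical_transition_matrix(f, x0=0.1234)
print(np.round(P, 3))  # each visited row sums to 1: a stochastic matrix on the states
```

Whether such empirically assigned probabilities coincide exactly with those of a finite Markov chain is precisely the question the paper's theorems address; the sketch only shows how a state-space discretization of deterministic motion gives rise to transition statistics.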