Partially Observed Control of Markov Processes. III
- Open Access
- 1 July 1990
- journal article
- Published by Institute of Mathematical Statistics in The Annals of Probability
- Vol. 18 (3) , 1099-1125
- https://doi.org/10.1214/aop/1176990737
Abstract
Let $\nu$ denote the value function of a partially observed control problem. If $\nu$ is once differentiable in a certain direction $\hat{B}$, then optimal controls are characterized by a feedback involving the directional derivative $\hat{B}\nu$. It is also shown that $\nu$ satisfies the corresponding Bellman equation, an infinite-dimensional PDE on the space of measures, in the viscosity sense of Crandall and Lions.
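Schematically, a Bellman equation of the kind the abstract refers to lives on the space of probability measures $\mathcal{P}(E)$, since in partially observed control one controls through the filter (the conditional distribution of the state). The following generic form is a sketch only, with assumed symbols not taken from the paper: $\beta$ a discount rate, $c$ a running cost, and $\mathcal{A}^u$ the generator of the controlled filter process.

```latex
% Generic (assumed, not the paper's exact) dynamic programming equation
% for the value function \nu on the space of measures:
\beta\,\nu(\mu)
  \;=\;
  \inf_{u \in U}\Big\{ \int_{E} c(x,u)\,\mu(dx)
    \;+\; \big(\mathcal{A}^{u}\nu\big)(\mu) \Big\},
  \qquad \mu \in \mathcal{P}(E).
```

Because $\nu$ need not be differentiable, such an equation is interpreted in the viscosity sense: one requires the inequality versions of the equation to hold for smooth test functions touching $\nu$ from above (subsolution) and from below (supersolution), in the spirit of Crandall and Lions.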