The Optimal Strategy in the Control Problem Associated with the Hamilton–Jacobi–Bellman Equation
- 1 March 1980
- journal article
- Published by Society for Industrial & Applied Mathematics (SIAM) in SIAM Journal on Control and Optimization
- Vol. 18 (2) , 191-198
- https://doi.org/10.1137/0318014
Abstract
Consider the Hamilton–Jacobi–Bellman equation $\max _m \{ A_m u(x) - f_m (x)\} = 0$ a.e. in $R^n $, where the $A_m$ ($m = 1,2, \cdots$) are the infinitesimal generators of diffusion processes with constant coefficients and with discount rates $c_m \geqq \alpha > 0$. It is known that the solution can be represented as the optimal cost functional of a control problem in which one can switch from one stochastic system to another without penalty. In this paper it is shown that if, for some $k$, $A_k f_m (x) - A_m f_k (x) \geqq c > 0$ for all $m \ne k$ and $| x | > R$, then $A_k u(x) - f_k (x) = 0$ for $| x | > R_1 $, with $R_1 $ sufficiently large; that is, the optimal strategy when $| x | > R_1 $ is to stay with the diffusion and running cost associated with $A_k $, $f_k $.
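For orientation, the standard setup behind the abstract can be sketched as follows. This is an assumed typical form for constant-coefficient generators with discounting and for the well-known switching-cost representation of $u$; it is a sketch for context, not quoted from the paper itself.

```latex
% Assumed typical form (not quoted from the paper): each A_m is a
% constant-coefficient elliptic operator incorporating the discount c_m,
\[
  A_m u(x) \;=\; -\tfrac{1}{2}\sum_{i,j} a^{m}_{ij}\,
      \frac{\partial^2 u}{\partial x_i \partial x_j}(x)
      \;-\; \sum_i b^{m}_i\, \frac{\partial u}{\partial x_i}(x)
      \;+\; c_m\, u(x).
\]
% The solution of  max_m { A_m u - f_m } = 0  is then known to be the
% optimal cost over switching strategies m(.) (no switching penalty):
\[
  u(x) \;=\; \inf_{m(\cdot)} \;
      E_x \int_0^{\infty} f_{m(t)}\bigl(\xi(t)\bigr)\,
      \exp\Bigl(-\int_0^{t} c_{m(s)}\,ds\Bigr)\,dt,
\]
% where xi(t) is the diffusion driven, at each time, by the coefficients
% of the currently selected generator A_{m(t)}.
```

In this notation, the paper's result says that once $|x| > R_1$ the infimum is attained by the constant strategy $m(\cdot) \equiv k$, which is exactly the statement $A_k u - f_k = 0$ there.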