Discrete Dynamic Programming and Viscosity Solutions of the Bellman Equation
- 1 November 1989
- journal article
- Published by the European Mathematical Society (EMS) Publishing House GmbH in Annales de l'Institut Henri Poincaré C, Analyse non linéaire
- Vol. 6, pp. 161–183
- https://doi.org/10.1016/s0294-1449(17)30020-3
Abstract
This paper presents a technique for approximating the viscosity solution of the Bellman equation in deterministic control problems. The technique, based on discrete dynamic programming, leads to monotonically convergent schemes and allows one to prove a priori error estimates. Several computational algorithms leading to monotone convergence are reviewed and compared.
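To fix ideas, consider the standard discrete dynamic programming formulation for an infinite-horizon discounted problem with dynamics y' = f(y, a), running cost g, and discount rate λ: the Bellman equation is replaced by the fixed-point relation v(x) = min_{a∈A} [(1 − λh) v(x + h f(x, a)) + h g(x, a)], where h is the time step. The Python sketch below iterates this map on a one-dimensional grid with linear interpolation; the particular dynamics, cost, grid, and step sizes are illustrative assumptions, not values taken from the paper.

```python
# A minimal sketch of a discrete dynamic programming (value iteration)
# scheme of the kind discussed in the abstract, for a 1-D infinite-horizon
# discounted control problem.  The dynamics f, cost g, discount rate lam,
# grid, and step sizes below are illustrative assumptions.
import numpy as np

def f(x, a):          # controlled drift (hypothetical choice)
    return a

def g(x, a):          # running cost (hypothetical choice)
    return x**2 + 0.1 * a**2

lam = 1.0                                # discount rate
h = 0.05                                 # time step of the discretization
controls = np.linspace(-1.0, 1.0, 21)    # discretized control set A
grid = np.linspace(-2.0, 2.0, 201)       # spatial grid

def interp(v, x):
    """Piecewise-linear interpolation of grid values v at points x."""
    return np.interp(np.clip(x, grid[0], grid[-1]), grid, v)

# Fixed-point (value) iteration for
#   v(x) = min_a [ (1 - lam*h) * v(x + h f(x,a)) + h g(x,a) ].
v = np.zeros_like(grid)
for it in range(2000):
    candidates = np.stack([
        (1.0 - lam * h) * interp(v, grid + h * f(grid, a)) + h * g(grid, a)
        for a in controls
    ])
    v_new = candidates.min(axis=0)       # minimize over the control set
    if np.max(np.abs(v_new - v)) < 1e-8:
        break
    v = v_new
```

Since the right-hand side of the fixed-point relation is a contraction with factor (1 − λh), the iteration converges geometrically; moreover, starting from v ≡ 0 with a nonnegative running cost, the iterates increase monotonically, which is the monotone convergence behavior the abstract emphasizes.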