On Deterministic Control Problems: An Approximation Procedure for the Optimal Cost I. The Stationary Problem
- 1 March 1985
- journal article
- Published by Society for Industrial & Applied Mathematics (SIAM) in SIAM Journal on Control and Optimization
- Vol. 23 (2) , 242-266
- https://doi.org/10.1137/0323018
Abstract
We study deterministic optimal control problems whose strategies combine a stopping time, continuous controls, and impulse controls. We obtain the optimal cost, characterized as the maximum element of a suitable set of subsolutions of the associated Hamilton–Jacobi equation, by an approximation method based on a particular discretization scheme for the derivatives. Convergence of the approximate solutions is established using a discrete maximum principle, which is also proved. For the numerical solution of the approximate problems we use a method of relaxation type; the algorithm is very simple and can be run on computers with small central memory. In Part I we study the stationary case; in Part II [SIAM J. Control Optim., 23 (1985), pp. 267–285] we study the nonstationary case.
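To illustrate the flavor of a relaxation-type iteration for a discretized stationary Bellman/Hamilton–Jacobi equation, here is a minimal sketch. It is not the paper's actual scheme: the grid, running cost `x**2`, two-point control set, and boundary treatment are all assumptions made for illustration. Values are updated in place (Gauss–Seidel style), so only one copy of the grid is stored, which is the "small central memory" property the abstract alludes to; convergence follows because the discounted Bellman operator is a contraction.

```python
# Sketch (illustrative only): relaxation sweeps for a discretized
# stationary Bellman equation of the form
#   u(x) = min_a [ h * cost(x, a) + beta * u(x + h * a) ]
# on a uniform grid over [0, 1], with discount beta < 1 induced by
# a positive discount rate lam.

def solve_relaxation(n=51, lam=1.0, tol=1e-10, max_sweeps=10_000):
    h = 1.0 / (n - 1)                 # grid spacing
    beta = 1.0 / (1.0 + lam * h)      # one-step discount factor (< 1)
    xs = [i * h for i in range(n)]
    u = [0.0] * n                     # initial guess
    for _ in range(max_sweeps):
        diff = 0.0
        for i in range(n):
            best = float("inf")
            for step in (-1, 1):      # two admissible drifts (assumed)
                j = min(max(i + step, 0), n - 1)   # reflect at boundary
                cand = h * xs[i] ** 2 + beta * u[j]
                best = min(best, cand)
            diff = max(diff, abs(best - u[i]))
            u[i] = best               # in-place (relaxation) update
        if diff < tol:                # sup-norm stopping test
            break
    return xs, u
```

A typical call is `xs, u = solve_relaxation()`; the returned `u` approximates the fixed point of the discrete Bellman operator, and the in-place sweep means memory usage is a single array of `n` floats regardless of how many sweeps are needed.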