On Deterministic Control Problems: an Approximation Procedure for the Optimal Cost II. The Nonstationary Case
- 1 March 1985
- journal article
- Published by Society for Industrial & Applied Mathematics (SIAM) in SIAM Journal on Control and Optimization
- Vol. 23 (2) , 267-285
- https://doi.org/10.1137/0323019
Abstract
We study deterministic optimal control problems involving a stopping time together with continuous and impulse controls in each strategy. We obtain the optimal cost, characterized as the maximum element of a suitable set of subsolutions of the associated Hamilton-Jacobi equation, by an approximation method. A particular derivative discretization scheme is employed. Convergence of the approximate solutions is shown by exploiting a discrete maximum principle, which is also proved. For the numerical solution of the approximate problems we use a relaxation-type method. The algorithm is very simple and can be run on computers with small central memory. In Part I [SIAM J. Control Optim., 23 (1985), pp. 242–266] we studied the stationary case; in Part II we study the nonstationary case and apply our results to a short-run model of energy production management.
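The relaxation-type iteration mentioned above can be illustrated on a simple model problem. The sketch below is an assumption for illustration only, not the paper's exact scheme: a one-dimensional stationary problem with a running cost, a stopping cost (obstacle), and constant drift, discretized so that the value satisfies a discrete obstacle-type Hamilton-Jacobi inequality, which is then solved by Gauss-Seidel relaxation sweeps starting from the obstacle.

```python
import numpy as np

# Illustrative sketch (assumed model, not the authors' exact scheme):
# minimize the discounted running cost  integral e^{-a t} l(x(t)) dt
# with the option to stop at cost psi(x), dynamics x' = f(x) on [0, 1].
# The discrete obstacle-type equation
#   u_i = min( psi_i,  dt_i * l_i + e^{-a dt_i} * u_j ),  j = upwind neighbor,
# is solved by Gauss-Seidel relaxation sweeps (each update uses the
# freshest neighbor values, so only one array is kept in memory).

def solve_relaxation(n=101, a=1.0, tol=1e-10, max_sweeps=10_000):
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    f = -np.ones(n)          # constant leftward drift (assumed for illustration)
    ell = x**2               # running cost l(x)
    psi = 0.2 + 0.5 * x      # stopping cost psi(x) (the obstacle)
    u = psi.copy()           # start from the obstacle: a supersolution
    for sweep in range(max_sweeps):
        delta = 0.0
        for i in range(n):
            j = i + 1 if f[i] > 0 else i - 1   # upwind neighbor
            if 0 <= j < n:
                dt = h / abs(f[i])             # time to reach the neighbor
                cand = dt * ell[i] + np.exp(-a * dt) * u[j]
            else:
                cand = psi[i]  # at the outflow boundary only stopping remains
            new = min(psi[i], cand)
            delta = max(delta, abs(new - u[i]))
            u[i] = new
        if delta < tol:
            break
    return x, u, psi, sweep

x, u, psi, sweeps = solve_relaxation()
```

Starting from the obstacle, the updates are monotone nonincreasing and bounded below, so the sweeps converge; the iterate always stays below the obstacle, matching the subsolution characterization of the optimal cost. The small memory footprint (a single array updated in place) reflects the abstract's remark that the algorithm runs on computers with small central memory.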