Abstract
We study deterministic optimal control problems whose strategies involve a stopping time together with continuous and impulse controls. We obtain the optimal cost, characterized as the maximum element of a suitable set of subsolutions of the associated Hamilton–Jacobi equation, by an approximation method. A particular discretization scheme for the derivatives is employed. Convergence of the approximate solutions is shown by means of a discrete maximum principle, which is also proved. For the numerical solution of the approximate problems we use a method of relaxation type. The algorithm is very simple and can be run on computers with small central memory. In Part I we study the stationary case; in Part II [SIAM J. Control Optim., 23 (1985), pp. 267–285] we study the nonstationary case.
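The relaxation-type method mentioned above can be illustrated, in spirit, by a Gauss–Seidel sweep over a discretized dynamic-programming equation with an option to stop. The sketch below is not the authors' scheme: the grid, the stopping cost `psi`, the running cost, and the discount factor are all invented for illustration; it only shows the generic pattern of updating each grid value in place until the sweeps stabilize.

```python
# Hypothetical 1D illustration of a relaxation (Gauss-Seidel) iteration for a
# discretized Hamilton-Jacobi equation with optional stopping. All data below
# (grid size, costs, discount factor) are assumptions made for this sketch.

def relax_hjb(n=50, beta=0.95, tol=1e-10, max_sweeps=10_000):
    """Iterate u_i = min(psi_i, min_a [cost(i, a) + beta * u_{i+a}]) by sweeps."""
    psi = [1.0 + 0.01 * i for i in range(n)]      # stopping cost (assumed)

    def running(i, a):                             # running cost (assumed)
        return 0.05 + 0.001 * abs(i - n // 2)

    u = psi[:]                                     # start from the obstacle itself
    sweep = 0
    for sweep in range(max_sweeps):
        diff = 0.0
        for i in range(n):
            best = psi[i]                          # option: stop immediately
            for a in (-1, 1):                      # option: move left or right
                j = min(max(i + a, 0), n - 1)      # clamp at the boundary
                best = min(best, running(i, a) + beta * u[j])
            diff = max(diff, abs(best - u[i]))
            u[i] = best                            # Gauss-Seidel: update in place
        if diff < tol:                             # sweeps have stabilized
            break
    return u, psi, sweep

u, psi, sweeps = relax_hjb()
```

Because each update is a monotone contraction (the discount factor `beta` is below 1), the sweeps converge to the unique fixed point, which stays below the stopping cost everywhere; only the current grid of values needs to be held in memory, consistent with the abstract's remark about small central memory.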
