Optimal Control Systems with Stochastic Boundary Conditions and State Equations
- 1 August 1972
- journal article
- Published by Institute for Operations Research and the Management Sciences (INFORMS) in Operations Research
- Vol. 20 (4) , 875-887
- https://doi.org/10.1287/opre.20.4.875
Abstract
We consider here control problems of optimization with stochastic boundary conditions that are obtained by allowing boundary data and certain constraints to depend upon a random variable π = π(a), a ϵ I1, with π continuous over I1. We also allow the state equations to depend upon a random variable or a stochastic process η = η(t, b), b ϵ I2, with suitably smooth sample paths. We take for admissible controls measurable functions u = u(t) of t only, with values in a compact control set U(t) ⊂ Em. Hence the admissible trajectories x = x(t, a, b) are stochastic processes; that is, for each t, x is a random variable over I = I1 × I2 with respect to a given probability measure P over I. We take as our cost functional E k[t2, x(t2, a, b)], where E denotes expectation with respect to P. We state an existence theorem for such systems, and give necessary conditions that an optimal pair x0, u0 must satisfy. We also discuss applications of these results to various models in operations research.
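As an illustrative sketch only (not from the paper), the setup above can be approximated numerically by sample-average approximation: draw samples of the random boundary data and the state disturbance, discretize the control u(t) as piecewise constant with values in a compact set U = [-1, 1], and minimize the Monte Carlo estimate of the expected terminal cost E k[t2, x(t2, a, b)]. The state equation, cost k(t, x) = x², and distributions below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem data (assumptions, not from the paper):
# state equation x'(t) = u(t) + eta, with eta a random disturbance,
# random initial condition x(t1) = a, compact control set U = [-1, 1],
# and cost E[k(t2, x(t2))] with k(t, x) = x**2.
t1, t2, n_steps = 0.0, 1.0, 20
dt = (t2 - t1) / n_steps
a_samples = rng.normal(0.5, 0.2, size=200)    # random boundary data a
eta_samples = rng.normal(0.0, 0.1, size=200)  # random state disturbance

def expected_cost(u):
    """Sample-average approximation of E[x(t2)**2] for a
    piecewise-constant control u (one value per time step)."""
    x = a_samples.copy()
    for uk in u:
        x = x + dt * (uk + eta_samples)  # Euler step of the state equation
    return np.mean(x ** 2)

# Projected finite-difference gradient descent on the discretized control,
# projecting each value back into the compact control set [-1, 1].
u = np.zeros(n_steps)
eps = 1e-5
for _ in range(200):
    grad = np.empty(n_steps)
    base = expected_cost(u)
    for k in range(n_steps):
        up = u.copy()
        up[k] += eps
        grad[k] = (expected_cost(up) - base) / eps
    u = np.clip(u - 0.5 * grad, -1.0, 1.0)

print(expected_cost(u) < expected_cost(np.zeros(n_steps)))  # control reduces cost
```

Because the control is a function of t only (not of the random outcome), a single control must trade off against the whole distribution of boundary data; the residual cost after optimization is essentially the sample variance of the terminal randomness, which no open-loop control can remove.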