On The Marginal Function in Nonlinear Programming
- 1 May 1984
- journal article
- Published by Institute for Operations Research and the Management Sciences (INFORMS) in Mathematics of Operations Research
- Vol. 9 (2), 208-221
- https://doi.org/10.1287/moor.9.2.208
Abstract
For several types of finite- or infinite-dimensional optimization problems, the marginal function (or optimal value function) is characterized by different local approximations such as generalized gradients, generalized directional derivatives, and directional Hadamard or Dini derivatives. We give estimates for these quantities that are determined by multipliers satisfying necessary optimality conditions. When the functions defining the optimization problem are more than once continuously differentiable, higher-order necessary conditions are employed to obtain refined estimates for the marginal function. As a by-product we give a new equivalent formulation of Clarke's multiplier rule for nonsmooth optimization problems, which shows that the set of all multipliers satisfying these necessary conditions is the union of a finite number of closed convex cones.
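For orientation, here is a minimal sketch of the standard setting the abstract refers to; the notation f, g_i, u, and lambda below is illustrative and not taken from the paper itself.

% Marginal (optimal value) function of a perturbed nonlinear program
% (illustrative notation; not the paper's own formulation).
\[
  v(u) \;=\; \inf_{x \in \mathbb{R}^n} \bigl\{\, f(x) \;:\; g_i(x) \le u_i,\ i = 1,\dots,m \,\bigr\}.
\]
% Classical smooth case: if the unperturbed problem (u = 0) has a unique
% Karush-Kuhn-Tucker multiplier vector $\lambda^{*}$ and $v$ is differentiable
% at $0$, the multipliers act as shadow prices:
\[
  \frac{\partial v}{\partial u_i}(0) \;=\; -\lambda_i^{*}, \qquad i = 1,\dots,m .
\]
% The estimates described in the abstract extend this kind of relation to
% nonsmooth settings via generalized gradients and Hadamard or Dini
% directional derivatives of $v$.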