Abstract
For several types of finite- or infinite-dimensional optimization problems, the marginal function (or optimal value function) is characterized by various local approximations, such as generalized gradients, generalized directional derivatives, and directional Hadamard or Dini derivatives. We give estimates for these quantities in terms of multipliers satisfying necessary optimality conditions. When the functions defining the optimization problem are more than once continuously differentiable, higher-order necessary conditions are employed to obtain refined estimates for the marginal function. As a by-product, we give a new, equivalent formulation of Clarke's multiplier rule for nonsmooth optimization problems. This shows that the set of all multipliers satisfying these necessary conditions is the union of finitely many closed convex cones.
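To fix ideas, the following is a minimal sketch of the kind of setting and first-order estimate the abstract refers to; the notation (f, g, v, Lambda) and the Gauvin–Tolle-type bound are chosen here for illustration and are not taken from the paper, and the bound is only valid under additional hypotheses (e.g., a constraint qualification of Mangasarian–Fromovitz type) that the abstract does not state.

% Illustrative setting: marginal function of a right-hand-side
% perturbed program (notation chosen for this sketch).
\[
  v(y) \;=\; \inf\bigl\{\, f(x) \;:\; g_i(x) \le y_i,\ i = 1,\dots,m \,\bigr\}.
\]
% Writing \Lambda(\bar x) for the set of Karush--Kuhn--Tucker
% multipliers at a local solution \bar x of the unperturbed problem
% (y = 0), a classical upper estimate for the upper Dini directional
% derivative of v in a direction d reads
\[
  D^{+} v(0; d) \;:=\; \limsup_{t \downarrow 0} \frac{v(t d) - v(0)}{t}
  \;\le\; \max_{\lambda \in \Lambda(\bar x)} \langle -\lambda, d \rangle .
\]

Estimates of this form are what the paper refines: the abstract's higher-order and nonsmooth results replace the smooth Dini derivative above with generalized gradients and generalized directional derivatives, with the multiplier set again supplying the bound.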
