A View of Unconstrained Optimization.
- 2 October 1987
- report
- Published by Defense Technical Information Center (DTIC)
Abstract
Finding the unconstrained minimizer of a function of more than one variable is an important problem with many practical applications, including data fitting, engineering design, and process control. In addition, techniques for solving unconstrained optimization problems form the basis for most methods for solving constrained optimization problems. This paper surveys the state of the art for solving unconstrained optimization problems and the closely related problem of solving systems of nonlinear equations. The authors first give some brief mathematical background. They then discuss Newton's method, the fundamental method underlying most approaches to these problems, as well as the inexact Newton method. The two main practical deficiencies of Newton's method, the need for analytic derivatives and the possible failure to converge to the solution from poor starting points, are key issues in unconstrained optimization; the paper addresses the first by discussing techniques for approximating derivatives and the extension of these techniques to solving large, sparse problems. It then discusses the main methods used to ensure convergence from poor starting points: line search methods and trust region methods. Also discussed are two rather different approaches to unconstrained optimization, the Nelder-Mead simplex method and conjugate direction methods.
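The abstract's two central themes, Newton's method and safeguards against poor starting points, can be illustrated with a short sketch. The code below is not drawn from the report itself; the function and parameter names are illustrative. It applies Newton's method with a backtracking (Armijo) line search, falling back to the steepest-descent direction whenever the Newton step is not a descent direction, and tries it on the Rosenbrock function from a standard difficult starting point.

```python
import numpy as np

def newton_line_search(f, grad, hess, x0, tol=1e-8, max_iter=100):
    """Newton's method with a backtracking (Armijo) line search.

    f, grad, hess : callables returning the objective, gradient, and Hessian.
    x0            : starting point (1-D numpy array).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # first-order optimality reached
            break
        p = np.linalg.solve(hess(x), -g)     # Newton step: solve H p = -g
        if g.dot(p) >= 0:                    # not a descent direction
            p = -g                           # fall back to steepest descent
        t, fx = 1.0, f(x)
        # Backtracking: halve the step until the Armijo sufficient-decrease
        # condition holds, which safeguards convergence from poor starts.
        while f(x + t * p) > fx + 1e-4 * t * g.dot(p) and t > 1e-16:
            t *= 0.5
        x = x + t * p
    return x

# Example: minimize the Rosenbrock function from the point (-1.2, 1).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0]**2), -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(newton_line_search(f, grad, hess, np.array([-1.2, 1.0])))
```

The backtracking loop is one of the two globalization strategies named in the abstract; a trust region method would instead bound the step length directly and solve a constrained model subproblem at each iteration.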