Abstract
This article is a theoretical study of Davidon's method, as modified by Fletcher and Powell, for the unconstrained minimization of an arbitrary function on an n-dimensional Euclidean space. After a general overview of the convergence of minimization methods and the presentation of a few lemmas, we examine the algorithm in the case of a quadratic function. We collect properties already stated or proved in various papers and give a proof of a more general convergence result. In the non-quadratic case, since we could not prove convergence of the original algorithm, we propose two modifications that lead to a simple convergence proof under weak assumptions. Moreover, it can be shown that if f is strictly convex and twice continuously differentiable, the approximation of the inverse of the Jacobian of the gradient generated by the algorithm actually converges to this inverse. Numerical results obtained with the modified method are presented and compared with those obtained by other known methods. Finally, we note that this algorithm may be of interest for solving mathematical programming problems by penalty function methods.
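To make the iteration the abstract refers to concrete, the sketch below shows the classical Davidon-Fletcher-Powell (DFP) rank-two update of the inverse-Hessian approximation (the inverse of the Jacobian of the gradient). This is a minimal illustration of the standard DFP scheme, not the paper's modified algorithms; the function names (`dfp_minimize`, `_line_search`) and the backtracking line search are assumptions chosen for the example.

```python
import numpy as np

def dfp_minimize(f, grad, x0, tol=1e-8, max_iter=200):
    """Minimal sketch of the Davidon-Fletcher-Powell quasi-Newton method.

    H approximates the inverse Hessian and is updated with the classical
    DFP rank-two formula at each accepted step.
    """
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                 # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                     # quasi-Newton search direction
        t = _line_search(f, grad, x, d)
        s = t * d                      # step s_k = x_{k+1} - x_k
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                  # gradient difference y_k
        sy, Hy = s @ y, H @ y
        if sy > 1e-12:                 # skip update if curvature condition fails
            H = H + np.outer(s, s) / sy - np.outer(Hy, Hy) / (y @ Hy)
        x, g = x_new, g_new
    return x

def _line_search(f, grad, x, d, t=1.0, c=1e-4, shrink=0.5):
    """Backtracking (Armijo) line search; a stand-in for the exact line
    minimization assumed in the quadratic-case analysis."""
    fx, gd = f(x), grad(x) @ d
    while f(x + t * d) > fx + c * t * gd:
        t *= shrink
    return t
```

For a quadratic function with exact line searches, this update reproduces the well-known finite-termination behaviour discussed in the article; with an inexact line search, as above, it serves only as a rough illustration of how the inverse-Hessian approximation is built up step by step.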
