Minimization of strictly convex functions: an improved optimality test based on Fenchel duality

Abstract
Many estimation problems in signal or image processing culminate in the minimization of a cost function. When this function is strictly convex, the usual optimality condition is the nullity of the gradient: the solution is reached when the gradient vanishes. Since exact nullity is numerically out of reach, the actual stopping test compares a norm of the gradient with a threshold. In doing so, the iterations may be stopped while the current point is still far from the solution, or long after the solution has effectively been reached. Whatever the value of the threshold, one has no idea of the discrepancy between the current point and the true minimizer. Taking advantage of the Fenchel dual formulation of the problem, we propose a much more sensitive stopping test that overcomes the shortcomings of the gradient-norm criterion. To illustrate our claim, the test is implemented on a signal processing simulation example.
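The following is a minimal sketch of the idea behind such a test, not the authors' implementation. For a ridge-regularized least-squares problem written as F(Ax) + G(x) with F(z) = ½‖z−b‖² and G(x) = (μ/2)‖x‖², weak Fenchel duality gives, for any dual candidate y, the bound f(x) − min f ≤ f(x) + F*(y) + G*(−Aᵀy). Choosing y = ∇F(Ax) = Ax − b yields a computable duality gap that certifies the suboptimality of the current iterate, unlike a raw gradient-norm threshold. The problem data, step size, and tolerance below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, mu = 20, 10, 0.5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def f(x):
    # primal objective F(Ax) + G(x)
    return 0.5 * np.sum((A @ x - b) ** 2) + 0.5 * mu * np.sum(x ** 2)

def duality_gap(x):
    # dual candidate y = grad F(Ax) = Ax - b makes the Fenchel-Young
    # inequality tight for the data-fit term F
    y = A @ x - b
    Fstar = 0.5 * np.sum(y ** 2) + b @ y       # F*(y)
    Gstar = np.sum((A.T @ y) ** 2) / (2 * mu)  # G*(-A^T y)
    return f(x) + Fstar + Gstar                # upper-bounds f(x) - min f

# plain gradient descent, stopped by the duality-gap test
L = np.linalg.norm(A, 2) ** 2 + mu             # Lipschitz constant of grad f
x = np.zeros(n)
tol = 1e-8
for k in range(100000):
    if duality_gap(x) <= tol:
        break
    grad = A.T @ (A @ x - b) + mu * x
    x -= grad / L

# exact minimizer, for comparison with the certified bound
x_star = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)
print(f"iterations: {k}, gap: {duality_gap(x):.2e}, "
      f"true suboptimality: {f(x) - f(x_star):.2e}")
```

When the loop exits, the gap certifies that the current point is within `tol` of the optimal value, which is exactly the guarantee a gradient-norm test cannot provide.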
