Abstract
This paper describes and analyzes a method of successive approximations for finding critical points of a function that can be written as the difference of two convex functions. The method is based on a non-convex duality theory: at each iteration one solves a convex optimization problem, alternating between the primal and the dual variables. Under very general structural conditions on the problem, we prove that the resulting sequence is a descent sequence that converges to a critical point of the problem. To illustrate the method, we apply it to some weighted eigenvalue problems, to a problem from astrophysics, and to some semilinear elliptic equations.
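For orientation, the alternating scheme described above can be sketched in standard DC-programming notation; the symbols below (the convex components $g$, $h$ and their Fenchel conjugates $g^{*}$, $h^{*}$) are conventional for this class of methods and are not taken verbatim from the paper:

```latex
% Minimize f(x) = g(x) - h(x), with g, h convex.
% The dual problem, via Toland-type duality, is
%   minimize  h^{*}(y) - g^{*}(y).
% One iteration alternates between primal and dual variables:
\begin{align*}
  y_k     &\in \partial h(x_k)
            && \text{(dual step: subgradient of } h \text{ at } x_k\text{)}\\
  x_{k+1} &\in \partial g^{*}(y_k)
            = \operatorname*{argmin}_{x}\;\bigl\{\, g(x) - \langle x, y_k\rangle \,\bigr\}
            && \text{(primal step: a convex problem)}
\end{align*}
% Each step solves a convex subproblem, and f(x_{k+1}) \le f(x_k),
% yielding the descent property asserted in the abstract.
```

Each half-step is a convex problem even though $f$ itself is non-convex, which is the structural point the abstract emphasizes.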