Parallel Gradient Distribution in Unconstrained Optimization
- 1 November 1995
- journal article
- Published by Society for Industrial & Applied Mathematics (SIAM) in SIAM Journal on Control and Optimization
- Vol. 33 (6), 1916-1925
- https://doi.org/10.1137/s0363012993250220
Abstract
A parallel version is proposed for a fundamental theorem of serial unconstrained optimization. The parallel theorem allows each of $k$ parallel processors to simultaneously use a different algorithm, such as a descent, Newton, quasi-Newton, or conjugate gradient algorithm. Each processor can perform one or many steps of a serial algorithm on the portion of the gradient of the objective function assigned to it, independently of the other processors. Eventually a synchronization step is performed which, for differentiable convex functions, consists of taking a strong convex combination of the $k$ points found by the $k$ processors. A more general synchronization step, applicable to convex as well as nonconvex functions, consists of taking the best point found by the $k$ processors, or any point that is better. The fundamental result established is that any accumulation point of the parallel algorithm is stationary in the nonconvex case and is a global solution in the convex case. Computational testing on the Thinking Machines CM-5 multiprocessor indicates a speedup on the order of the number of processors employed.
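The sketch below is a minimal serial simulation of the scheme described in the abstract: each of $k$ "processors" takes gradient steps only on its assigned block of variables, and the synchronization step keeps the best point found if it improves on the current iterate. The function names, test objective, block assignment, and step sizes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed details, not the paper's code): k "processors" each
# take gradient steps on their assigned block of coordinates, then a
# synchronization step keeps the best candidate found.
import numpy as np

def block_gradient_steps(grad, x, block, n_steps=10, lr=0.1):
    """Take gradient steps that update only the coordinates in `block`."""
    x = x.copy()
    for _ in range(n_steps):
        g = grad(x)
        x[block] -= lr * g[block]  # move only along the assigned block
    return x

def parallel_gradient_distribution(f, grad, x0, blocks, n_outer=50):
    x = x0.copy()
    for _ in range(n_outer):
        # Each block is processed independently from the same start point x;
        # in a real implementation these runs would execute in parallel.
        candidates = [block_gradient_steps(grad, x, b) for b in blocks]
        # General synchronization step: take the best point found by the
        # k processors, provided it is no worse than the current iterate.
        best = min(candidates, key=f)
        if f(best) <= f(x):
            x = best
    return x

if __name__ == "__main__":
    # Simple convex test problem: f(x) = 0.5 * x^T A x with A positive definite.
    rng = np.random.default_rng(0)
    A = np.diag(rng.uniform(1.0, 5.0, size=8))
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x
    blocks = [np.arange(0, 4), np.arange(4, 8)]  # two "processors"
    x_star = parallel_gradient_distribution(f, grad, rng.standard_normal(8), blocks)
    print("f(x*) =", f(x_star))
```

For the convex synchronization variant mentioned in the abstract, the keep-the-best step would be replaced by a strong convex combination of the $k$ candidate points.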