On the convergence of the decomposition method for support vector machines
- 1 November 2001
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 12 (6) , 1288-1298
- https://doi.org/10.1109/72.963765
Abstract
The decomposition method is currently one of the major methods for solving support vector machines (SVMs). Its convergence properties have not been fully understood. The general asymptotic convergence was first proposed by Chang et al.; however, their working set selection does not coincide with existing implementations. A later breakthrough by Keerthi and Gilbert (2000, 2002) proved finite termination for practical cases, though the size of the working set is restricted to two. In this paper, we prove the asymptotic convergence of the algorithm used by the software SVM^light and other later implementations. The size of the working set can be any even number. Extensions to other SVM formulations are also discussed.
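To illustrate the kind of algorithm the paper analyzes, the following is a minimal sketch (not the paper's exact algorithm or working set selection) of a decomposition iteration for the SVM dual with a working set of size two. It uses a maximal-violating-pair selection in the spirit of Keerthi and Gilbert, a toy linear kernel, and hypothetical names throughout; it solves min 0.5 a'Qa - e'a subject to y'a = 0 and 0 <= a <= C.

```python
import numpy as np

def smo_step_solver(X, y, C=1.0, tol=1e-3, max_iter=10_000):
    """Sketch of a size-two decomposition (SMO-style) solver for the SVM dual.

    Minimizes 0.5 a^T Q a - e^T a s.t. y^T a = 0, 0 <= a <= C,
    where Q_ij = y_i y_j K(x_i, x_j) with a linear kernel (assumption)."""
    n = len(y)
    K = X @ X.T                          # linear kernel matrix (toy choice)
    Q = np.outer(y, y) * K               # Q_ij = y_i y_j K_ij
    alpha = np.zeros(n)
    for _ in range(max_iter):
        grad = Q @ alpha - 1.0           # gradient of the dual objective
        # Index sets for the maximal violating pair
        up = ((alpha < C - 1e-12) & (y > 0)) | ((alpha > 1e-12) & (y < 0))
        low = ((alpha < C - 1e-12) & (y < 0)) | ((alpha > 1e-12) & (y > 0))
        score = -y * grad
        i = int(np.argmax(np.where(up, score, -np.inf)))
        j = int(np.argmin(np.where(low, score, np.inf)))
        if score[i] - score[j] < tol:
            break                        # approximate KKT optimality reached
        # Solve the two-variable subproblem analytically along the feasible
        # direction d_i = y_i, d_j = -y_j, which preserves y^T alpha = 0.
        denom = K[i, i] + K[j, j] - 2.0 * K[i, j]
        t = (score[i] - score[j]) / max(denom, 1e-12)
        # Clip the step so both variables stay inside the box [0, C].
        t = min(t,
                (C - alpha[i]) if y[i] > 0 else alpha[i],
                alpha[j] if y[j] > 0 else (C - alpha[j]))
        alpha[i] += t * y[i]
        alpha[j] -= t * y[j]
    return alpha
```

On a toy separable problem, the returned multipliers satisfy the equality and box constraints and yield a separating hyperplane via w = sum_i alpha_i y_i x_i. A working set larger than two, as covered by the paper's analysis, would instead require solving a small quadratic subproblem at each iteration rather than this closed-form update.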
This publication has 19 references indexed in Scilit:
- Training ν-Support Vector Classifiers: Theory and Algorithms. Neural Computation, 2001
- Improvements to Platt's SMO Algorithm for SVM Classifier Design. Neural Computation, 2001
- The analysis of decomposition methods for support vector machines. IEEE Transactions on Neural Networks, 2000
- New Support Vector Algorithms. Neural Computation, 2000
- Improvements to the SMO algorithm for SVM regression. IEEE Transactions on Neural Networks, 2000
- https://doi.org/10.1162/15324430152733142, 2000
- Successive overrelaxation for support vector machines. IEEE Transactions on Neural Networks, 1999
- Support-vector networks. Machine Learning, 1995
- Interpolation of scattered data: Distance matrices and conditionally positive definite functions. Constructive Approximation, 1986
- On search directions for minimization algorithms. Mathematical Programming, 1973