Algorithms for parallel boosting
- 22 March 2006
- Conference paper
- Published by the Institute of Electrical and Electronics Engineers (IEEE)
- pp. 368-373
- https://doi.org/10.1109/icmla.2005.8
Abstract
We present several algorithms that combine many base learners trained on different distributions of the data, while allowing some of the base learners to be trained simultaneously by separate processors. Our algorithms train batches of base classifiers using distributions that can be generated in advance of the training process. We propose several heuristic methods that produce a group of useful distributions based on the performance of the classifiers in the previous batch. We present experimental evidence suggesting that two of our algorithms produce classifiers as accurate as the corresponding AdaBoost classifier with the same number of base learners, but with a greatly reduced computation time.
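The batch scheme the abstract describes can be made concrete with a short sketch. The Python code below is a minimal illustration rather than the paper's actual algorithm: the heuristic for generating a batch of distributions in advance (random multiplicative perturbations of the current boosting distribution), the names `batch_boost` and `train_one`, and the use of decision stumps as base learners are all assumptions introduced here.

```python
# Minimal sketch of batch-parallel boosting, assuming labels in {-1, +1}.
# The distribution heuristic below is an illustrative assumption, not the
# paper's method.
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_one(args):
    """Fit one decision stump on (X, y) under the given example distribution."""
    X, y, dist = args
    return DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=dist)

def batch_boost(X, y, n_batches=5, batch_size=4, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    dist = np.full(n, 1.0 / n)          # start from the uniform distribution
    ensemble, alphas = [], []
    for _ in range(n_batches):
        # Generate the whole batch of distributions before any training starts.
        dists = [dist]
        for _ in range(batch_size - 1):
            d = dist * rng.uniform(0.5, 2.0, size=n)  # hypothetical heuristic
            dists.append(d / d.sum())
        # Train the batch simultaneously, one distribution per worker process.
        with ProcessPoolExecutor() as pool:
            stumps = list(pool.map(train_one, [(X, y, d) for d in dists]))
        # Fold the batch in sequentially with AdaBoost-style weights.
        for stump in stumps:
            pred = stump.predict(X)
            err = float(np.clip(dist @ (pred != y), 1e-10, 1 - 1e-10))
            alpha = 0.5 * np.log((1.0 - err) / err)
            ensemble.append(stump)
            alphas.append(alpha)
            dist = dist * np.exp(-alpha * np.where(pred == y, 1.0, -1.0))
            dist /= dist.sum()
    return ensemble, np.array(alphas)

def predict(ensemble, alphas, X):
    """Weighted majority vote of the base classifiers."""
    return np.sign(sum(a * h.predict(X) for a, h in zip(alphas, ensemble)))
```

Each batch of classifiers is fit concurrently by separate processes, while the AdaBoost-style weights and distribution updates are applied sequentially once the batch returns, mirroring the abstract's separation between pre-generated distributions and training. (On platforms that spawn rather than fork, the pool call must run under an `if __name__ == "__main__":` guard.)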
References
- Boosting Algorithms for Parallel and Distributed Learning. Distributed and Parallel Databases, 2002.
- Smooth Boosting and Learning with Malicious Noise. Springer Nature, 2001.
- Boosting as entropy projection. Association for Computing Machinery (ACM), 1999.
- Training a Sigmoidal Node Is Hard. Neural Computation, 1999.
- Improved Boosting Algorithms Using Confidence-rated Predictions. Machine Learning, 1999.
- Arcing classifier (with discussion and a rejoinder by the author). The Annals of Statistics, 1998.
- A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. Journal of Computer and System Sciences, 1997.
- Bagging predictors. Machine Learning, 1996.