On the Bayes-risk consistency of regularized boosting methods
Open Access
- 1 February 2004
- journal article
- Published by Institute of Mathematical Statistics in The Annals of Statistics
- Vol. 32 (1), 30-55
- https://doi.org/10.1214/aos/1079120129
Abstract
The probability of error of classification methods based on convex combinations of simple base classifiers produced by "boosting" algorithms is investigated. The main result of the paper is that certain regularized boosting algorithms provide Bayes-risk consistent classifiers under the sole assumption that the Bayes classifier may be approximated by a convex combination of the base classifiers. Nonasymptotic distribution-free bounds are also developed which offer new insight into how boosting works and help explain its success in practical classification problems.
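To make the setting concrete, the sketch below shows one generic form of a regularized boosting classifier: a greedy, AdaBoost-style fit over convex combinations of decision stumps, followed by rescaling the combination weights so their l1 norm equals a fixed regularization level. This is only an illustration of the general setup, assuming decision stumps as base classifiers and the exponential loss; it is not the specific estimator or regularization scheme analyzed in the paper, and all function names are hypothetical.

```python
import numpy as np

def stump_predict(X, feature, threshold, sign):
    """Decision stump base classifier: sign * (+1 if x[feature] > threshold else -1)."""
    return sign * np.where(X[:, feature] > threshold, 1.0, -1.0)

def fit_regularized_boosting(X, y, n_rounds=50, lam=5.0):
    """Greedy minimization of the exponential loss over combinations of stumps,
    then rescaling so the l1 norm of the weights equals `lam` (a simple stand-in
    for the kind of constraint a regularized boosting method imposes).
    Labels y are assumed to take values in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # example weights, as in AdaBoost
    stumps, alphas = [], []
    for _ in range(n_rounds):
        # exhaustively pick the stump with the smallest weighted error
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sgn in (+1.0, -1.0):
                    pred = stump_predict(X, j, thr, sgn)
                    err = np.sum(w * (pred != y))
                    if best is None or err < best[0]:
                        best = (err, j, thr, sgn)
        err, j, thr, sgn = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(X, j, thr, sgn)
        w *= np.exp(-alpha * y * pred)   # reweight misclassified examples
        w /= w.sum()
        stumps.append((j, thr, sgn))
        alphas.append(alpha)
    # regularize: rescale so the combination lies in lam * (convex hull of stumps)
    alphas = np.array(alphas)
    alphas *= lam / np.sum(np.abs(alphas))
    return stumps, alphas

def predict(X, stumps, alphas):
    """Classify by the sign of the regularized convex combination."""
    f = sum(a * stump_predict(X, j, thr, sgn)
            for a, (j, thr, sgn) in zip(alphas, stumps))
    return np.sign(f)
```

In this kind of scheme the regularization level (here `lam`) controls the trade-off between approximating the Bayes classifier and overfitting; consistency results of the type stated in the abstract typically require it to grow slowly with the sample size.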