Boosting With the L2 Loss
- 1 June 2003
- journal article
- Published by Taylor & Francis in Journal of the American Statistical Association
- Vol. 98 (462), 324-339
- https://doi.org/10.1198/016214503000125
Abstract
This article investigates a computationally simple variant of boosting, L2Boost, which is constructed from a functional gradient descent algorithm with the L2 loss function. Like other boosting algorithms, L2Boost repeatedly applies a prechosen fitting method, called the learner, in an iterative fashion. Based on the explicit expression for the refitting of residuals in L2Boost, the case of (symmetric) linear learners is studied in detail for both regression and classification. In particular, with the boosting iteration m acting as the smoothing or regularization parameter, a new exponential bias-variance trade-off is found, with the variance (complexity) term increasing very slowly as m tends to infinity. When the learner is a smoothing spline, an optimal rate of convergence holds for both regression and classification, and the boosted smoothing spline even adapts to higher-order, unknown smoothness. Moreover, a simple expansion of a (smoothed) 0–1 loss function is derived to reveal the importance of the d…
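
The abstract's description of L2Boost, refitting the learner to the current residuals at every step, can be made concrete with a short sketch. The following Python is an illustration under stated assumptions, not the paper's code: the learner is represented abstractly by a symmetric smoother matrix S, and the Gaussian-kernel smoother in the demo (its bandwidth and all variable names) is a hypothetical stand-in for the smoothing-spline learner analyzed in the article.

```python
import numpy as np

def l2boost(S, y, m):
    """Minimal L2Boost sketch with a (symmetric) linear learner.

    S : (n, n) smoother matrix of the base learner, so one fit of the
        learner to a response vector r is S @ r.
    y : (n,) response vector.
    m : number of boosting iterations (the regularization parameter).

    Returns the fitted values after m iterations of functional gradient
    descent on the L2 loss.
    """
    fit = S @ y                      # iteration 0: fit the learner to y
    for _ in range(m):
        residuals = y - fit          # negative gradient of the L2 loss
        fit = fit + S @ residuals    # refit to residuals, add to ensemble
    return fit

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0.0, 1.0, 60))
    y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, x.size)

    # A symmetric Gaussian-kernel smoother as a stand-in learner
    # (this particular learner and its bandwidth are assumptions).
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.05) ** 2)
    d = K.sum(axis=1)
    S = K / np.sqrt(d[:, None] * d[None, :])   # symmetric normalization

    for m in (0, 10, 100):
        mse = np.mean((l2boost(S, y, m) - np.sin(2 * np.pi * x)) ** 2)
        print(f"m = {m:3d}  MSE vs. true function = {mse:.4f}")
```

For a linear learner, unrolling this recursion gives the fitted values in closed form as (I - (I - S)^(m+1)) y; this is the kind of explicit residual-refitting expression the abstract refers to, and it makes visible why the iteration count m plays the role of a smoothing parameter, with overfitting setting in only slowly as m grows.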