An Efficient Method to Estimate Bagging's Generalization Error

    • preprint
    • Published in RePEc
Abstract
In bagging [Bre94a] one uses bootstrap replicates of the training set [Efr79, ET93] to try to improve a learning algorithm's performance. The computational requirements for estimating the resultant generalization error on a test set by means of cross-validation are often prohibitive; for leave-one-out cross-validation one needs to train the underlying algorithm on the order of $m\nu$ times, where $m$ is the size of the training set and $\nu$ is the number of replicates. This paper presents several techniques for exploiting the bias-variance decomposition [GBD92, Wol96] to estimate the generalization error of a bagged learning algorithm without invoking yet more training of the underlying learning algorithm. The best of our estimators exploits stacking [Wol92]. In a set of experiments reported here, it was found to be more accurate than both the alternative cross-validation-based estimator of the bagged algorithm's error and the cross-validation-based estimator of the underlying algorithm's error. This improvement was particularly pronounced for small test sets. This suggests a novel justification for using bagging: improved estimation of generalization error.

Key words: machine learning, regression, bootstrap, bagging
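To make the cost argument concrete, the following is a minimal sketch (not the paper's estimator) of the naive baseline the abstract refers to: leave-one-out cross-validation of a bagged regressor, which retrains the base learner on the order of $m\nu$ times. The base learner (ordinary least squares), the helper names, and the toy data are all illustrative assumptions, not taken from the paper.

```python
# Sketch: naive leave-one-out CV of a bagged regressor costs ~ m * nu base-learner fits,
# which is the expense the paper's estimators are designed to avoid.
import numpy as np

rng = np.random.default_rng(0)


def fit_base_learner(X, y):
    # Placeholder base learner: ordinary least squares with an intercept.
    coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)
    return coef


def predict(coef, X):
    return np.c_[np.ones(len(X)), X] @ coef


def bagged_fit(X, y, nu, rng):
    # Train the base learner on nu bootstrap replicates of (X, y).
    m = len(X)
    return [fit_base_learner(X[idx], y[idx])
            for idx in (rng.integers(0, m, size=m) for _ in range(nu))]


def bagged_predict(members, X):
    # Bagged prediction: average over the bootstrap-trained members.
    return np.mean([predict(c, X) for c in members], axis=0)


def loo_cv_error(X, y, nu, rng):
    # Naive leave-one-out CV of the bagged predictor:
    # m iterations, each refitting nu members -> m * nu fits in total.
    m = len(X)
    sq_errors = []
    for i in range(m):
        keep = np.arange(m) != i
        members = bagged_fit(X[keep], y[keep], nu, rng)
        sq_errors.append((bagged_predict(members, X[i:i + 1])[0] - y[i]) ** 2)
    return float(np.mean(sq_errors))


# Toy regression data, purely illustrative.
m, nu = 50, 20
X = rng.normal(size=(m, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=m)

print("LOO-CV estimate of bagged generalization error:", loo_cv_error(X, y, nu, rng))
print("Base-learner fits required:", m * nu)
```

The paper's contribution, per the abstract, is to replace this retraining loop with estimators built from the bias-variance decomposition and stacking, using only the models already trained for bagging itself.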