Averaging Regularized Estimators
- 1 July 1997
- journal article
- Published by MIT Press in Neural Computation
- Vol. 9 (5), 1163-1178
- https://doi.org/10.1162/neco.1997.9.5.1163
Abstract
We compare the performance of averaged regularized estimators. We show that the improvement in performance that can be achieved by averaging depends critically on the degree of regularization used in training the individual estimators. We compare four different averaging approaches: simple averaging, bagging, variance-based weighting, and variance-based bagging. For all of the averaging methods, the greatest improvement (compared to the individual estimators) is achieved if no or only a small degree of regularization is used. In this regime, variance-based weighting and variance-based bagging are superior to simple averaging or bagging. Our experiments indicate that better performance, both for individual estimators and for averaging, is achieved in combination with regularization. With increasing degrees of regularization, the two bagging-based approaches (bagging and variance-based bagging) outperform the individual estimators, simple averaging, and variance-based weighting. Bagging and variance-based bagging appear to be the best combining methods overall across a wide range of degrees of regularization.
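To make the four combining schemes concrete, the sketch below is a minimal illustration, not the paper's code. The paper's individual estimators are regularized neural networks; here ridge regression (with a small target jitter standing in for the networks' training stochasticity) is an assumed substitute, and the synthetic data, split sizes, and function names are all hypothetical. Simple averaging and bagging use uniform weights; the variance-based variants weight each member by the inverse of its error variance estimated on held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_member(X, y, lam):
    """One regularized estimator: ridge regression with penalty `lam`.
    (The paper uses regularized neural networks; the tiny target jitter
    below mimics their training stochasticity so non-bagged members differ.)"""
    y_jit = y + 0.05 * y.std() * rng.standard_normal(len(y))
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y_jit)
    return lambda Xq: Xq @ w

def bootstrap(X, y):
    """Bootstrap resample of the training set (the 'bagging' ingredient)."""
    idx = rng.integers(0, len(y), size=len(y))
    return X[idx], y[idx]

def combine(members, X_val, y_val, X_test, variance_based):
    """Combine member predictions: uniform weights (simple averaging),
    or inverse-variance weights estimated on a held-out validation set."""
    P = np.stack([f(X_test) for f in members])              # shape (M, n_test)
    if not variance_based:
        return P.mean(axis=0)
    var = np.array([np.var(y_val - f(X_val)) for f in members])
    w = (1.0 / var) / (1.0 / var).sum()                     # weights sum to 1
    return w @ P

# Synthetic regression data, split into train / validation / test.
n, d, lam, M = 200, 5, 1.0, 10
X = rng.standard_normal((3 * n, d))
y = X @ rng.standard_normal(d) + 0.3 * rng.standard_normal(3 * n)
(X_tr, y_tr), (X_va, y_va), (X_te, y_te) = [
    (X[i * n:(i + 1) * n], y[i * n:(i + 1) * n]) for i in range(3)]

full = [fit_member(X_tr, y_tr, lam) for _ in range(M)]             # full training set
bag = [fit_member(*bootstrap(X_tr, y_tr), lam) for _ in range(M)]  # bootstrap resamples

for name, members, vb in [("simple averaging", full, False),
                          ("variance-based weighting", full, True),
                          ("bagging", bag, False),
                          ("variance-based bagging", bag, True)]:
    mse = np.mean((combine(members, X_va, y_va, X_te, vb) - y_te) ** 2)
    print(f"{name:26s} test MSE: {mse:.4f}")
```

Varying `lam` in this sketch loosely mirrors the paper's experimental axis: the degree of regularization used when training the individual members, against which the four combining methods are compared.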