No Free Lunch for Early Stopping
Open Access
- 1 May 1999
- journal article
- Published by MIT Press in Neural Computation
- Vol. 11 (4), 995-1009
- https://doi.org/10.1162/089976699300016557
Abstract
We show that with a uniform prior on models having the same training error, early stopping at some fixed training error above the training error minimum results in an increase in the expected generalization error.
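
To make the stopping rule in the abstract concrete, here is a minimal sketch of early stopping at a fixed training error above the attainable minimum, assuming a linear least-squares model trained by gradient descent. This is an illustration only, not the paper's analytical setting; all names (`fit_early_stopped`, `e_stop`, `lr`) are hypothetical.

```python
# Illustrative sketch, not the paper's setup: halt training once the
# training error reaches a fixed level e_stop chosen above the minimum
# achievable training error. All identifiers here are hypothetical.
import numpy as np

def fit_early_stopped(X, y, e_stop, lr=0.01, max_steps=10000):
    """Linear least squares by gradient descent, stopped at training error e_stop."""
    w = np.zeros(X.shape[1])
    err = np.inf
    for _ in range(max_steps):
        residual = X @ w - y
        err = np.mean(residual ** 2)   # training error E(w)
        if err <= e_stop:              # stop above the minimum, not at it
            break
        w -= lr * (2 / len(y)) * X.T @ residual
    return w, err

# Usage example on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)
w, err = fit_early_stopped(X, y, e_stop=0.05)
print(f"stopped at training error {err:.4f}")
```

The paper's result concerns the expected generalization error of models selected this way under a uniform prior over models with the same training error; the sketch only shows the stopping mechanism itself.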