The Lack of A Priori Distinctions Between Learning Algorithms
- 1 October 1996
- journal article
- Published by MIT Press in Neural Computation
- Vol. 8 (7), 1341-1390
- https://doi.org/10.1162/neco.1996.8.7.1341
Abstract
This is the first of two papers that use off-training-set (OTS) error to investigate the assumption-free relationship between learning algorithms. This first paper discusses the senses in which there are no a priori distinctions between learning algorithms. (The second paper discusses the senses in which there are such distinctions.) In this first paper it is shown, loosely speaking, that for any two algorithms A and B, there are “as many” targets (or priors over targets) for which A has lower expected OTS error than B as vice versa, for loss functions like zero-one loss. In particular, this is true if A is cross-validation and B is “anti-cross-validation” (choose the learning algorithm with largest cross-validation error). This paper ends with a discussion of the implications of these results for computational learning theory. It is shown that one cannot say: if empirical misclassification rate is low, the Vapnik-Chervonenkis dimension of your generalizer is small, and the training set is large, then with high probability your OTS error is small. Other implications for “membership queries” algorithms and “punting” algorithms are also discussed.
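The flavor of the main result can be seen in a small simulation: averaged uniformly over all binary targets on a tiny input space, any two learning algorithms have identical expected off-training-set zero-one error. The sketch below is illustrative only and is not a construction from the paper; the input space, the fixed training inputs, and the two toy learners (a "majority vote" rule and a deliberately bad "anti-majority" rule) are all assumptions made here for the example.

```python
# Minimal sketch (not from the paper): under a uniform average over all
# binary targets on a tiny input space, a "sensible" learner and a
# "perverse" learner have the same expected OTS zero-one error.
from itertools import product

X = list(range(8))                           # tiny input space
train_x = [0, 1, 2]                          # fixed training inputs (assumption)
ots_x = [x for x in X if x not in train_x]   # off-training-set inputs

def majority(train_labels):
    """Predict the majority training label on every OTS point."""
    guess = int(sum(train_labels) * 2 >= len(train_labels))
    return lambda x: guess

def anti_majority(train_labels):
    """Predict the opposite of the majority training label."""
    guess = 1 - int(sum(train_labels) * 2 >= len(train_labels))
    return lambda x: guess

def avg_ots_error(learner):
    """Average OTS zero-one error over all 2^|X| binary targets."""
    total = 0.0
    targets = list(product([0, 1], repeat=len(X)))
    for f in targets:
        h = learner([f[x] for x in train_x])
        total += sum(h(x) != f[x] for x in ots_x) / len(ots_x)
    return total / len(targets)

print(avg_ots_error(majority))       # 0.5
print(avg_ots_error(anti_majority))  # 0.5 -- identical, as the NFL result suggests
```

Because the OTS labels are independent of the training labels once targets are averaged uniformly, any prediction rule scores 0.5; this mirrors, in miniature, the paper's claim about cross-validation versus anti-cross-validation.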