Learnability and the Vapnik-Chervonenkis dimension
- 1 October 1989
- journal article
- Published by Association for Computing Machinery (ACM) in Journal of the ACM
- Vol. 36 (4), 929-965
- https://doi.org/10.1145/76359.76371
Abstract
Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
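As a concrete illustration of the combinatorial parameter mentioned in the abstract (not part of the original article), the sketch below brute-forces the VC dimension of a simple concept class: closed intervals on the real line, restricted to a finite set of points. The function and variable names are illustrative only; the class of intervals and the example domain are assumptions chosen for clarity.

```python
from itertools import combinations

def interval_labels(points, a, b):
    """Label each point 1 if it lies in the closed interval [a, b], else 0."""
    return tuple(1 if a <= x <= b else 0 for x in points)

def shattered_by_intervals(points):
    """Check whether closed intervals on the real line shatter the given
    finite point set, i.e. realize all 2^|points| labelings."""
    pts = sorted(set(points))
    # Endpoints drawn from the points themselves (plus one degenerate value
    # below the minimum) suffice to realize every achievable labeling.
    candidates = [pts[0] - 1] + pts + [pts[-1] + 1]
    achievable = {interval_labels(points, a, b) for a in candidates for b in candidates}
    return len(achievable) == 2 ** len(points)

def vc_dimension_of_intervals(domain):
    """Brute-force the VC dimension of intervals restricted to a finite domain:
    the size of the largest subset that is shattered."""
    d = 0
    for k in range(1, len(domain) + 1):
        if any(shattered_by_intervals(s) for s in combinations(domain, k)):
            d = k
    return d

if __name__ == "__main__":
    # Closed intervals have VC dimension 2: any two points can be shattered,
    # but the labeling (1, 0, 1) on three ordered points is unrealizable.
    print(vc_dimension_of_intervals([0.0, 1.0, 2.0, 3.0]))  # expected output: 2
```

Because the VC dimension of intervals is finite (it equals 2), the paper's main theorem implies that this class is learnable in the distribution-free sense; classes whose VC dimension is infinite are not.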