Efficient distribution-free learning of probabilistic concepts
- 4 December 2002
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- pp. 382-391, vol. 1
- https://doi.org/10.1109/fscs.1990.89557
Abstract
A model of machine learning in which the concept to be learned may exhibit uncertain or probabilistic behavior is investigated. Such probabilistic concepts (or p-concepts) may arise in situations such as weather prediction, where the measured variables and their accuracy are insufficient to determine the outcome with certainty. It is required that learning algorithms be both efficient and general in the sense that they perform well for a wide class of p-concepts and for any distribution over the domain. Many efficient algorithms for learning natural classes of p-concepts are given, and an underlying theory of learning p-concepts is developed in detail.
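In this model, a p-concept assigns each domain point a probability of a positive label, and the learner sees examples drawn from an arbitrary domain distribution, each labeled stochastically according to that probability. A minimal sketch of this example oracle, with a hypothetical p-concept and domain distribution chosen purely for illustration:

```python
import random

def sample_examples(p_concept, distribution, n, seed=0):
    """Draw n labeled examples from the p-concept model:
    x is drawn from the domain distribution, and the label is 1
    with probability p_concept(x), else 0."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        x = distribution(rng)
        label = 1 if rng.random() < p_concept(x) else 0
        examples.append((x, label))
    return examples

# Hypothetical instance: a nondecreasing p-concept on [0, 1]
# under the uniform distribution (both chosen for illustration).
p = lambda x: min(1.0, max(0.0, x))    # probability of a positive label
D = lambda rng: rng.uniform(0.0, 1.0)  # domain distribution

data = sample_examples(p, D, 5)
```

Note that the same point x can appear with both labels across draws, which is what separates this setting from deterministic concept learning.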