The strength of weak learnability
- 1 January 1989
- Conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
The problem of improving the accuracy of a hypothesis output by a learning algorithm in the distribution-free learning model is considered. A concept class is learnable (or strongly learnable) if, given access to a source of examples of the unknown concept, the learner with high probability is able to output a hypothesis that is correct on all but an arbitrarily small fraction of the instances. The concept class is weakly learnable if the learner can produce a hypothesis that performs only slightly better than random guessing. It is shown that these two notions of learnability are equivalent. An explicit method is described for directly converting a weak learning algorithm into one that achieves arbitrarily high accuracy. This construction may have practical applications as a tool for efficiently converting a mediocre learning algorithm into one that performs extremely well. In addition, the construction has some interesting theoretical consequences.
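As a rough illustration of the conversion the abstract describes (not the paper's full recursive algorithm), the Python sketch below performs the core step of the construction: it trains three weak hypotheses, the second on a filtered distribution where the first is right only half the time and the third on examples where the first two disagree, and combines them by majority vote. The weak learner `weak_learn` (a 1-D threshold stump), the example oracle `draw`, and all names here are hypothetical stand-ins, not the paper's notation.

```python
import random

def weak_learn(examples):
    """Toy weak learner (an illustrative assumption): the best
    single-threshold stump on 1-D inputs with labels in {-1, +1}."""
    best, best_err = None, float("inf")
    for t in sorted({x for x, _ in examples}):
        for sign in (+1, -1):
            err = sum((sign if x >= t else -sign) != y for x, y in examples)
            if err < best_err:
                best_err, best = err, (t, sign)
    t, sign = best
    return lambda x: sign if x >= t else -sign

def boost(draw, n=200):
    """One level of the weak-to-strong conversion: three weak
    hypotheses combined by majority vote."""
    # h1: learn on the original example distribution.
    h1 = weak_learn([draw() for _ in range(n)])
    # h2: learn on a filtered distribution on which h1 is correct
    # exactly half the time (rejection sampling; assumes h1 is an
    # imperfect weak hypothesis, so both branches terminate).
    s2 = []
    while len(s2) < n:
        want_correct = random.random() < 0.5
        while True:
            x, y = draw()
            if (h1(x) == y) == want_correct:
                s2.append((x, y))
                break
    h2 = weak_learn(s2)
    # h3: learn only on examples where h1 and h2 disagree.
    s3 = []
    while len(s3) < n:
        x, y = draw()
        if h1(x) != h2(x):
            s3.append((x, y))
    h3 = weak_learn(s3)
    # Final hypothesis: majority vote of the three weak hypotheses.
    return lambda x: 1 if h1(x) + h2(x) + h3(x) > 0 else -1

if __name__ == "__main__":
    # Example oracle for an interval concept no single stump can match.
    def draw():
        x = random.random()
        return x, (1 if 0.25 <= x <= 0.75 else -1)

    H = boost(draw)
    test = [draw() for _ in range(2000)]
    print("boosted accuracy:", sum(H(x) == y for x, y in test) / 2000)
```

On this interval concept a lone stump errs on roughly a quarter of the instances, while in typical runs the filtered second and third hypotheses correct the first one's mistakes and the majority vote classifies nearly all instances correctly; the paper's full construction applies this combining step recursively to drive the error arbitrarily low.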