Abstract
The problem of improving the accuracy of a hypothesis output by a learning algorithm in the distribution-free learning model is considered. A concept class is learnable (or strongly learnable) if, given access to a source of examples of the unknown concept, the learner with high probability is able to output a hypothesis that is correct on all but an arbitrarily small fraction of the instances. The concept class is weakly learnable if the learner can produce a hypothesis that performs only slightly better than random guessing. It is shown that these two notions of learnability are equivalent. An explicit method is described for directly converting a weak learning algorithm into one that achieves arbitrarily high accuracy. This construction may have practical applications as a tool for efficiently converting a mediocre learning algorithm into one that performs extremely well. In addition, the construction has some interesting theoretical consequences.
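To make the conversion idea concrete, the sketch below shows one level of accuracy boosting by a three-hypothesis majority vote, in the spirit of the construction summarized above. It is an illustrative simplification, not the paper's exact recursive procedure: the helpers `weak_learn`, `draw_example`, and the binary 0/1 labels are assumptions introduced here for the example.

```python
import random

def boost_once(weak_learn, draw_example, n_samples=1000):
    """Combine three weak hypotheses into a more accurate one by majority vote.

    Assumed interfaces (placeholders, not from the paper):
      weak_learn(examples) -> hypothesis h, where h(x) returns a label in {0, 1}
      draw_example()       -> a labelled pair (x, y) from the target distribution
    """
    # h1: train the weak learner on the original example distribution.
    sample1 = [draw_example() for _ in range(n_samples)]
    h1 = weak_learn(sample1)

    # h2: train on a filtered distribution on which h1 is correct or
    # incorrect with equal probability (rejection-sampling sketch;
    # assumes h1 is neither perfect nor always wrong).
    def draw_for_h2():
        want_error = random.random() < 0.5
        while True:
            x, y = draw_example()
            if (h1(x) != y) == want_error:
                return x, y

    sample2 = [draw_for_h2() for _ in range(n_samples)]
    h2 = weak_learn(sample2)

    # h3: train only on instances where h1 and h2 disagree
    # (assumes such instances occur with nonzero probability).
    def draw_for_h3():
        while True:
            x, y = draw_example()
            if h1(x) != h2(x):
                return x, y

    sample3 = [draw_for_h3() for _ in range(n_samples)]
    h3 = weak_learn(sample3)

    # Final hypothesis: majority vote of the three weak hypotheses.
    def h(x):
        return 1 if h1(x) + h2(x) + h3(x) >= 2 else 0

    return h
```

Applying such a step repeatedly (recursively, with each weak hypothesis replaced by a boosted one) drives the error down to any desired level, which is the intuition behind the equivalence of weak and strong learnability.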
