Abstract
It is pointed out that in L.G. Valiant's learning model (Commun. ACM, vol. 27, p. 1134-42, 1984) many concepts turn out to be too hard to learn, whereas in practice almost nothing we care to learn appears to be unlearnable. To model the intuitive notion of learning more closely, it is assumed that learning happens under an arbitrary simple distribution, rather than under an arbitrary distribution as assumed by Valiant. A distribution is called simple if it is dominated by a semicomputable distribution. A general theory of learning under simple distributions is developed. In particular, it is shown that one can learn under all simple distributions if one can learn under a single fixed simple distribution, called the universal distribution. Interesting learning algorithms and several quite general new learnable classes are presented. It is also shown that, for essentially all algorithms, if the inputs are distributed according to the universal distribution, then the average-case complexity is of the same order of magnitude as the worst-case complexity.
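For reference, the two notions the abstract relies on can be sketched as follows; these are the standard definitions from the Kolmogorov-complexity literature, and the notation here is illustrative rather than quoted from the paper. A distribution $P$ is simple if some semicomputable distribution $Q$ multiplicatively dominates it:

    \exists\, c > 0 \;\; \forall x : \quad c\, Q(x) \ge P(x).

The universal distribution $\mathbf{m}$ satisfies $\mathbf{m}(x) = 2^{-K(x) + O(1)}$, where $K(x)$ is the prefix Kolmogorov complexity of $x$, and it multiplicatively dominates every semicomputable distribution $Q$:

    \forall x : \quad c_Q\, \mathbf{m}(x) \ge Q(x) \qquad \text{for some constant } c_Q \text{ depending only on } Q.

Because $\mathbf{m}$ dominates every semicomputable distribution, it is itself simple and serves as the hardest case among simple distributions, which is why learnability under $\mathbf{m}$ extends to learnability under all simple distributions.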
