Learning Boolean formulas
- 1 November 1994
- journal article
- Published by Association for Computing Machinery (ACM) in Journal of the ACM
- Vol. 41 (6), 1298–1328
- https://doi.org/10.1145/195613.195656
Abstract
Efficient distribution-free learning of Boolean formulas from positive and negative examples is considered. It is shown that classes of formulas that are efficiently learnable from only positive examples or only negative examples have certain closure properties. A new substitution technique is used to show that in the distribution-free case learning DNF (disjunctive normal form formulas) is no harder than learning monotone DNF. We prove that monomials cannot be efficiently learned from negative examples alone, even if the negative examples are uniformly distributed. It is also shown that, if the examples are drawn from uniform distributions, then the class of DNF in which each variable occurs at most once is efficiently weakly learnable (i.e., individual examples are correctly classified with a probability larger than 1/2 + 1/p, where p is a polynomial in the relevant parameters of the learning problem). We then show an equivalence between the notion of weak learning and the notion of group learning, where a group of examples of polynomial size, either all positive or all negative, must be correctly classified with high probability.
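The weak-learnability condition stated in the abstract can be written more explicitly as follows (a hedged restatement in standard PAC notation; the symbols h, f, D, n, and size(f) are the conventional ones and are not spelled out in the abstract itself):

```latex
% A hypothesis h weakly approximates a target formula f under
% example distribution D if it beats random guessing by an
% inverse-polynomial margin:
\Pr_{x \sim D}\bigl[\, h(x) = f(x) \,\bigr] \;\ge\; \frac{1}{2} + \frac{1}{p\bigl(n, \mathrm{size}(f)\bigr)},
% where p is a polynomial in the relevant parameters of the
% learning problem (e.g., the number of variables n and the
% size of the target formula f).
```

Group learning, by contrast, requires correctly classifying a polynomial-size group of examples that are either all positive or all negative with high probability; the paper shows these two notions are equivalent.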