Recognition rates of the Hebb rule for learning Boolean functions

Abstract
We study the Hebb rule for learning several classes of Boolean functions (random and linearly separable) defined on the hypercube of dimension N. Learning and generalization rates are derived in the N→∞ limit as functions of α=P/N, where P is the number of learned patterns. In the linearly separable case, the generalization rate grows monotonically from 1/2 to 1, whereas the learning rate first decreases from 1 to a minimum value and then increases again towards 1. This result is interpreted as an interference phenomenon, similar to that found in associative memories implemented with the same rule. Comparisons are then made with random Boolean functions, associative memories, and their clipped versions. The behavior of the Hebb rule is decomposed into two distinct contributions, referred to as ‘‘rote’’ and ‘‘conceptual’’ learning. Illustrative numerical simulations are given.
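The setup described in the abstract can be sketched numerically. The following is a minimal, illustrative simulation, not the paper's own code: a linearly separable Boolean function is defined by a hypothetical "teacher" vector with random Gaussian components, the Hebb couplings are taken as the pattern-output correlations, and the learning rate (fraction of trained patterns correctly reproduced) and generalization rate (accuracy on fresh random patterns) are measured at a few values of α=P/N. All names and parameter values (N=500, the chosen α values, the test-set size) are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500                                  # input dimension: patterns live on {-1,+1}^N
teacher = rng.standard_normal(N)         # hypothetical teacher vector defining the target

def target(x):
    """Linearly separable Boolean function: sign of the teacher projection."""
    return np.sign(x @ teacher)

results = {}
for alpha in (0.5, 2.0, 8.0):            # alpha = P/N, the storage ratio
    P = int(alpha * N)
    xi = rng.choice([-1.0, 1.0], size=(P, N))        # P random training patterns
    sigma = target(xi)                               # their target outputs
    J = sigma @ xi / P                               # Hebb rule: pattern-output correlations
    learn = np.mean(np.sign(xi @ J) == sigma)        # learning rate on the stored patterns
    test = rng.choice([-1.0, 1.0], size=(2000, N))   # fresh patterns for generalization
    gen = np.mean(np.sign(test @ J) == target(test))
    results[alpha] = (learn, gen)
    print(f"alpha={alpha:4.1f}  learning={learn:.3f}  generalization={gen:.3f}")
```

Consistent with the abstract, the generalization rate should increase with α towards 1, while the learning rate stays above the generalization rate at small α because the stored patterns interfere with one another exactly as in a Hebbian associative memory.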