Abstract
Learning in a specific type of multilayer network referred to as a $K=2$ parity machine is studied in the limit where both the system size $N$ and the number of examples $m$ become infinite while the ratio $\alpha = m/N$ remains finite. The machine consists of $K=2$ hidden units with non-overlapping receptive fields, each of size $N/2$; for each input, the output is the sign of the product of the two hidden units. We investigate incremental learning empirically, using a least-action algorithm, in the following two learning paradigms. In the first, it is assumed that each example is transmitted perfectly to the student. We show that the ability to generalize emerges as the rescaled length $l$ of the connection vector reaches a critical value $l_c$. Further, we show that the student can identify the target exactly in the limit $\alpha \to \infty$, where the prediction error $\epsilon$ decreases to zero as $\epsilon \approx 0.441\,\alpha^{-1/3}$. In the second paradigm, we examine what happens if each teacher signal is reversed to the opposite sign at a noise rate $\lambda$. For small $\lambda$, it is found that the prediction error converges to a finite value of $O(\sqrt{\lambda})$ in $O(\lambda^{-3/2})$ iterations. However, for noise rates beyond a critical value $\lambda_c \approx 0.175$, the student cannot acquire any generalization ability even as $\alpha \to \infty$.
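For orientation, the following is a minimal Python sketch of the setup described above: a $K=2$ parity machine with non-overlapping receptive fields, trained incrementally. The abstract does not spell out the update rule, so the sketch assumes the standard least-action step (on an output error, apply a perceptron correction to the hidden unit whose local field is smallest in magnitude); all identifiers (`N`, `B`, `J`, `least_action_step`, `lam`) are illustrative, not taken from the paper.

```python
import numpy as np

# Sketch of a K=2 parity machine with non-overlapping receptive fields,
# trained incrementally. Assumed least-action rule: on an output error,
# apply a perceptron correction to the hidden unit whose local field is
# smallest in magnitude. Identifiers are illustrative assumptions.

rng = np.random.default_rng(0)
N = 100                                    # input dimension; N/2 per hidden unit
half = N // 2

B = rng.standard_normal((2, half))         # teacher (target) weight vectors
J = 0.01 * rng.standard_normal((2, half))  # student; small random start

def parity_output(W, x):
    """Output is the sign of the product of the two hidden-unit fields."""
    fields = np.array([W[0] @ x[:half], W[1] @ x[half:]])
    return np.sign(fields.prod()), fields

def least_action_step(J, x, sigma):
    """On error, correct the hidden unit that takes the least change."""
    out, fields = parity_output(J, x)
    if out != sigma:
        k = int(np.argmin(np.abs(fields)))     # "least action" unit
        part = x[:half] if k == 0 else x[half:]
        J[k] -= np.sign(fields[k]) * part      # push its field past zero

# Incremental (on-line) learning: each example is presented once.
for _ in range(20 * N):
    x = rng.standard_normal(N)
    sigma, _ = parity_output(B, x)
    # Second paradigm (noisy teacher): flip the label with rate lam, e.g.
    #   if rng.random() < lam: sigma = -sigma
    least_action_step(J, x, sigma)

# Empirical estimate of the prediction error epsilon on fresh inputs.
test = rng.standard_normal((5000, N))
eps = np.mean([parity_output(J, x)[0] != parity_output(B, x)[0] for x in test])
print(f"estimated prediction error: {eps:.3f}")
```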