Learning of higher-order perceptrons with tunable complexities
- 25 September 1998
- Journal article
- Published by IOP Publishing in Journal of Physics A: Mathematical and General
- Vol. 31 (38), 7771-7784
- https://doi.org/10.1088/0305-4470/31/38/012
Abstract
We study learning from examples by higher-order perceptrons, which realize polynomially separable rules. The model complexities of the networks are made 'tunable' by varying the relative orders of the different monomial terms. We analyse the learning curves of higher-order perceptrons trained with the Gibbs algorithm. Learning is found to occur in a stepwise manner, because the numbers of examples needed to constrain the different phase-space components scale differently.
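As a concrete illustration of the model class the abstract describes, the sketch below implements a second-order perceptron, whose output is the sign of a polynomial (linear plus pairwise monomial terms) of the inputs. This is a minimal sketch assuming the simplest quadratic case; the function name, weight setup, and normalization are illustrative assumptions, not the paper's exact conventions.

```python
import numpy as np

def higher_order_perceptron(x, w1, w2):
    """Classify input x via the sign of a second-order polynomial.

    x  : input vector, shape (N,)
    w1 : first-order weights, shape (N,)
    w2 : second-order weights, shape (N, N); only i < j entries are used
    """
    linear = w1 @ x
    # Sum over distinct pairs i < j of w2[i, j] * x[i] * x[j]
    quadratic = sum(w2[i, j] * x[i] * x[j]
                    for i in range(len(x)) for j in range(i + 1, len(x)))
    return np.sign(linear + quadratic)

# Usage: a random quadratic rule on N = 10 binary (+/-1) inputs
rng = np.random.default_rng(0)
N = 10
w1 = rng.normal(size=N)
w2 = rng.normal(size=(N, N))
x = rng.choice([-1.0, 1.0], size=N)
print(higher_order_perceptron(x, w1, w2))
```

Varying the relative magnitudes of the first- and second-order weight terms is one way to picture the 'tunable' complexity the abstract refers to, with the different monomial orders contributing separate phase-space components.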