Learning of higher-order perceptrons with tunable complexities

Abstract
We study learning from examples by higher-order perceptrons, which realize polynomially separable rules. The model complexity of the networks is made 'tunable' by varying the relative orders of the different monomial terms. We analyse the learning curves of higher-order perceptrons trained with the Gibbs algorithm and find that learning occurs in a stepwise manner. This is because the number of examples needed to constrain each phase-space component scales differently with the order of the corresponding monomial term.
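As a minimal illustrative sketch (this specific form is an assumption for illustration, not quoted from the paper), a higher-order perceptron of maximal order $K$ on inputs $x_1,\dots,x_N$ computes an output of the form

$$\sigma(\mathbf{x}) = \mathrm{sgn}\left( \sum_{k=1}^{K} \sum_{i_1 < \cdots < i_k} w^{(k)}_{i_1 \cdots i_k}\, x_{i_1} \cdots x_{i_k} \right),$$

so the decision boundary is a degree-$K$ polynomial surface in the inputs. Adjusting the relative contribution of the order-$k$ monomial terms against one another is what makes the model complexity tunable, and components of different order require differently scaled numbers of examples to constrain, which underlies the stepwise learning curves described above.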
