Circular backpropagation networks for classification
- 1 January 1997
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 8 (1) , 84-97
- https://doi.org/10.1109/72.554194
Abstract
The class of mapping networks is a general family of tools to perform a wide variety of tasks. This paper presents a standardized, uniform representation for this class of networks, and introduces a simple modification of the multilayer perceptron with interesting practical properties, especially well suited to pattern classification tasks. The proposed model unifies the two main representation paradigms found in the class of mapping networks for classification, namely, the surface-based and the prototype-based schemes, while retaining the advantage of being trainable by backpropagation. The enhancement in the representation properties and the generalization performance is assessed through results about the worst-case requirement in terms of hidden units and about the Vapnik-Chervonenkis dimension and cover capacity. The theoretical properties of the network also suggest that the proposed modification to the multilayer perceptron is in many senses optimal. A number of experimental verifications also confirm theoretical results about the model's improved performance, as compared with the multilayer perceptron and the Gaussian radial basis functions network.
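The abstract's unification of the surface-based and prototype-based schemes can be sketched with a single "circular" unit: a standard perceptron pre-activation augmented with one extra weighted term, the squared norm of the input. The sketch below is an illustration of that idea, not the paper's exact formulation; the function name `cbp_unit` and the specific parameter values are assumptions for demonstration. When the extra weight `wc` is zero the unit reduces to an ordinary MLP neuron (hyperplane decision surface); when `wc` is nonzero the zero set of the pre-activation is a hypersphere, giving a prototype-like boundary, and the whole thing remains differentiable and hence trainable by backpropagation.

```python
import numpy as np

def cbp_unit(x, w, b, wc):
    """Illustrative circular-perceptron unit (hypothetical helper).

    Pre-activation = w . x + b + wc * ||x||^2.
    wc == 0  -> ordinary perceptron: decision boundary is a hyperplane.
    wc != 0  -> the pre-activation's zero set is a hypersphere, so the
                unit behaves like a prototype-based (RBF-like) detector.
    """
    pre = np.dot(w, x) + b + wc * np.dot(x, x)
    return 1.0 / (1.0 + np.exp(-pre))  # sigmoid keeps it backprop-trainable

# With wc != 0, the boundary pre = 0 in 2-D is a circle rather than a line.
x = np.array([0.5, -0.2])
w = np.array([1.0, 2.0])
print(cbp_unit(x, w, b=0.1, wc=-0.5))
```

The single extra term is what lets one architecture cover both representation paradigms named in the abstract, with the surface-based MLP recovered exactly as the special case `wc = 0`.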