Pattern Discrimination Using Feedforward Networks: A Benchmark Study of Scaling Behavior
- 1 May 1993
- journal article
- Published by MIT Press in Neural Computation
- Vol. 5 (3), 483-491
- https://doi.org/10.1162/neco.1993.5.3.483
Abstract
The discrimination powers of multilayer perceptron (MLP) and learning vector quantization (LVQ) networks are compared for overlapping Gaussian distributions. It is shown, both analytically and with Monte Carlo studies, that the MLP network handles high-dimensional problems more efficiently than LVQ. This is mainly due to the sigmoidal form of the MLP transfer function, but also to the fact that the MLP uses hyperplanes more efficiently. Both algorithms are equally robust to limited training sets, and the learning curves fall off like 1/M, where M is the training set size; this behavior is compared with theoretical predictions from statistical estimates and Vapnik-Chervonenkis bounds.
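
The 1/M learning-curve behavior described in the abstract can be illustrated with a small Monte Carlo sketch: an MLP is trained on two overlapping unit-covariance Gaussian classes, and its excess test error over the Bayes limit is tracked as the training-set size M grows. This is only an illustrative setup, not the authors' experiment; the use of scikit-learn's MLPClassifier, the input dimension `d`, the class-mean separation `delta`, and the hidden-layer size are all assumptions introduced here, and the LVQ comparison from the paper is not reproduced.

```python
# Minimal Monte Carlo sketch (assumptions noted above): learning curve of an
# MLP classifier on two overlapping d-dimensional Gaussian classes.
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
d = 8          # input dimension (illustrative choice)
delta = 1.0    # class-mean separation along the first axis (illustrative choice)

def sample(n_per_class):
    """Draw n_per_class points per class from two unit-covariance Gaussians
    whose means differ by `delta` along the first coordinate."""
    x0 = rng.normal(0.0, 1.0, size=(n_per_class, d))
    x1 = rng.normal(0.0, 1.0, size=(n_per_class, d))
    x1[:, 0] += delta
    X = np.vstack([x0, x1])
    y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])
    return X, y

# Bayes error for equal-prior, unit-variance Gaussians separated by delta:
# P(err) = Phi(-delta / 2), where Phi is the standard normal CDF.
bayes_error = norm.cdf(-delta / 2.0)

X_test, y_test = sample(20000)
for M in [50, 100, 200, 400, 800, 1600]:      # total training-set sizes
    X_train, y_train = sample(M // 2)          # M samples in total
    mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    mlp.fit(X_train, y_train)
    err = 1.0 - mlp.score(X_test, y_test)
    print(f"M={M:5d}  test error={err:.4f}  excess over Bayes={err - bayes_error:.4f}")
```

If the abstract's scaling holds in this toy setting, the printed excess error should shrink roughly in proportion to 1/M as the training-set size doubles.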