Abstract
The discrimination powers of multilayer perceptron (MLP) and learning vector quantization (LVQ) networks are compared for overlapping Gaussian distributions. It is shown, both analytically and with Monte Carlo studies, that the MLP network handles high-dimensional problems more efficiently than LVQ. This is mainly due to the sigmoidal form of the MLP transfer function, but also to the fact that the MLP uses hyperplanes more efficiently. Both algorithms are equally robust to limited training sets, and the learning curves fall off like 1/M, where M is the training set size; this behavior is compared with theoretical predictions from statistical estimates and Vapnik-Chervonenkis bounds.
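The following is a minimal Monte Carlo sketch, not the paper's code, of the comparison described above: a small sigmoid MLP and an LVQ1 classifier are trained on two overlapping d-dimensional Gaussian classes and scored on held-out test data. All names and parameter choices (d, M, learning rates, numbers of hidden units and prototypes) are illustrative assumptions.

```python
# Sketch: MLP vs. LVQ1 discrimination on overlapping Gaussians (assumed setup).
import numpy as np

rng = np.random.default_rng(0)

def make_data(M, d=8, sep=1.0):
    # Two unit-covariance Gaussian classes whose means differ by `sep`
    # along the first coordinate, so the class distributions overlap.
    X0 = rng.normal(0.0, 1.0, (M // 2, d))
    X1 = rng.normal(0.0, 1.0, (M - M // 2, d)) + sep * np.eye(d)[0]
    y = np.concatenate([np.zeros(M // 2), np.ones(M - M // 2)])
    return np.vstack([X0, X1]), y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=5, epochs=200, lr=0.1):
    # One hidden sigmoid layer and a sigmoid output unit, trained by
    # batch gradient descent on the squared error (an assumed recipe).
    d = X.shape[1]
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden);      b2 = 0.0
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)             # hidden activations
        out = sigmoid(H @ W2 + b2)           # network output in (0, 1)
        g_out = (out - y) * out * (1 - out)  # delta at the output unit
        g_H = np.outer(g_out, W2) * H * (1 - H)
        W2 -= lr * H.T @ g_out / len(y); b2 -= lr * g_out.mean()
        W1 -= lr * X.T @ g_H / len(y);   b1 -= lr * g_H.mean(axis=0)
    return lambda Xt: sigmoid(sigmoid(Xt @ W1 + b1) @ W2 + b2) > 0.5

def train_lvq(X, y, n_proto=4, epochs=200, lr=0.05):
    # LVQ1: move the winning codebook vector toward a same-class sample
    # and away from an opposite-class sample.
    idx = np.concatenate([rng.choice(np.where(y == c)[0], n_proto // 2,
                                     replace=False) for c in (0.0, 1.0)])
    protos, labels = X[idx].copy(), y[idx].copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            w = np.argmin(((protos - X[i]) ** 2).sum(axis=1))  # nearest prototype
            sign = 1.0 if labels[w] == y[i] else -1.0
            protos[w] += sign * lr * (X[i] - protos[w])
    return lambda Xt: labels[np.argmin(
        ((Xt[:, None, :] - protos) ** 2).sum(axis=2), axis=1)]

Xtr, ytr = make_data(M=400)   # limited training set
Xte, yte = make_data(M=4000)  # large test set for the discrimination estimate
for name, train in [("MLP", train_mlp), ("LVQ", train_lvq)]:
    predict = train(Xtr, ytr)
    print(name, "test accuracy:", (predict(Xte) == yte).mean())
```

Repeating this over several training-set sizes M and averaging over random draws would trace out the learning curves whose 1/M fall-off the abstract refers to.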