Neural Network Classifiers Estimate Bayesian a posteriori Probabilities
- 1 December 1991
- journal article
- Published by MIT Press in Neural Computation
- Vol. 3 (4), 461-483
- https://doi.org/10.1162/neco.1991.3.4.461
Abstract
Many neural network classifiers provide outputs which estimate Bayesian a posteriori probabilities. When the estimation is accurate, network outputs can be treated as probabilities and sum to one. Simple proofs show that Bayesian probabilities are estimated when desired network outputs are 1 of M (one output unity, all others zero) and a squared-error or cross-entropy cost function is used. Results of Monte Carlo simulations performed using multilayer perceptron (MLP) networks trained with backpropagation, radial basis function (RBF) networks, and high-order polynomial networks graphically demonstrate that network outputs provide good estimates of Bayesian probabilities. Estimation accuracy depends on network complexity, the amount of training data, and the degree to which training data reflect true likelihood distributions and a priori class probabilities. Interpretation of network outputs as Bayesian probabilities allows outputs from multiple networks to be combined for higher level decision making, simplifies creation of rejection thresholds, makes it possible to compensate for differences between pattern class probabilities in training and test data, allows outputs to be used to minimize alternative risk functions, and suggests alternative measures of network performance.
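The core claim can be checked numerically. Minimizing the expected squared error between a network output and 0/1 (1-of-M) targets is minimized by the conditional expectation of the target, which for 0/1 targets equals the posterior class probability P(class | x). The sketch below (an illustration written for this summary, not one of the paper's Monte Carlo experiments) trains a single sigmoid unit with a squared-error cost on two 1-D Gaussian classes with equal priors, where the true posterior is known in closed form, and compares the trained output to that posterior.

```python
import numpy as np

# Two 1-D Gaussian classes with equal priors:
#   class 0 ~ N(-1, 1), class 1 ~ N(+1, 1).
# By Bayes' rule the true posterior is P(1 | x) = sigmoid(2x).
rng = np.random.default_rng(0)
n = 2000
x = np.concatenate([rng.normal(-1.0, 1.0, n), rng.normal(+1.0, 1.0, n)])
t = np.concatenate([np.zeros(n), np.ones(n)])  # 1-of-M targets (M = 2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One sigmoid unit y = sigmoid(w*x + b), trained by gradient descent
# on the squared-error cost E = mean((y - t)^2).
w, b, lr = 0.0, 0.0, 1.0
for _ in range(5000):
    y = sigmoid(w * x + b)
    g = 2.0 * (y - t) * y * (1.0 - y)  # dE/dz through the sigmoid
    w -= lr * np.mean(g * x)
    b -= lr * np.mean(g)

est_post = sigmoid(w * x + b)   # network output, read as P(1 | x)
true_post = sigmoid(2.0 * x)    # analytic Bayesian posterior
print(np.mean(np.abs(est_post - true_post)))  # small => good estimate
```

With enough data the learned weights approach the Bayes-optimal values (w ≈ 2, b ≈ 0), so the output tracks the true posterior; with two classes the complementary output 1 - y plays the role of P(0 | x), and the two trivially sum to one.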