Moderating the outputs of support vector machine classifiers
- 1 September 1999
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 10 (5) , 1018-1031
- https://doi.org/10.1109/72.788642
Abstract
In this paper, we extend the use of moderated outputs to the support vector machine (SVM) by making use of a relationship between the SVM and the evidence framework. The moderated output is more in line with the Bayesian idea that the posterior weight distribution should be taken into account upon prediction, and it also alleviates the usual tendency of assigning overly high confidence to the estimated class memberships of the test patterns. Moreover, the moderated output derived here can be taken as an approximation to the posterior class probability. Hence, meaningful rejection thresholds can be assigned and outputs from several networks can be directly compared. Experimental results on both artificial and real-world data are also discussed.
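The abstract's core idea can be illustrated with a minimal sketch. The snippet below assumes a logistic link on the classifier's raw activation and uses the MacKay-style moderation factor from the evidence framework, which shrinks the activation according to the predictive variance so that uncertain predictions are pulled toward a probability of 0.5; the function names, the variance value, and the rejection threshold are illustrative assumptions, not the paper's exact procedure.

```python
import math


def sigmoid(a: float) -> float:
    """Logistic link mapping an activation to a class probability."""
    return 1.0 / (1.0 + math.exp(-a))


def moderated_output(activation: float, variance: float) -> float:
    """MacKay-style moderated probability (illustrative).

    The moderation factor kappa = (1 + pi * s^2 / 8)^(-1/2) shrinks the
    activation in proportion to its predictive variance s^2, approximating
    the average of sigmoid(a) over the posterior weight distribution.
    With zero variance this reduces to the unmoderated sigmoid output.
    """
    kappa = 1.0 / math.sqrt(1.0 + math.pi * variance / 8.0)
    return sigmoid(kappa * activation)


def classify_with_rejection(prob: float, threshold: float = 0.9):
    """Reject the pattern when neither class probability is confident enough."""
    if max(prob, 1.0 - prob) < threshold:
        return None  # rejected: defer to a human or another classifier
    return 1 if prob >= 0.5 else 0


# A confident raw activation becomes less confident once uncertainty is
# taken into account, which is exactly what enables meaningful rejection
# thresholds on the (approximate) posterior class probability.
p_plain = moderated_output(2.0, variance=0.0)      # plain sigmoid output
p_moderated = moderated_output(2.0, variance=4.0)  # pulled toward 0.5
```

Because the moderated value approximates a posterior probability, thresholding it (as in `classify_with_rejection`) is meaningful, and outputs of several such classifiers can be compared on a common scale.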