Performance and generalization of the classification figure of merit criterion function
- 1 March 1991
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 2 (2), 322-325
- https://doi.org/10.1109/72.80345
Abstract
A criterion function for training neural networks, the classification figure of merit (CFM), introduced by J.B. Hampshire and A.H. Waibel (IEEE Trans. Neural Networks, vol. 1, pp. 216-218, June 1990), is studied. It is shown that this criterion function has some highly desirable properties. CFM has optimal training-set performance, which is related (but not equivalent) to its monotonicity. However, there is no reason to expect generalization with this criterion function to be substantially better than that of the standard criterion functions. It is nonetheless preferable, because its ability to find classifiers that classify the training set well also leads to improved test-set performance when the training set is suitably detailed.
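The monotonicity discussed above means that, per sample, CFM scores correct classifications higher than incorrect ones. A minimal sketch of a sigmoid-form CFM for a single sample, to be maximized over the training set; the function name and the parameter defaults (`alpha`, `beta`, `zeta`) are illustrative, not the exact values used by Hampshire and Waibel:

```python
import numpy as np

def cfm(outputs, target, alpha=1.0, beta=4.0, zeta=0.0):
    """Classification figure of merit for one sample (to be maximized).

    A sigmoid of the margin between the target class output and each
    competing output, averaged over competitors. Parameter values here
    are illustrative defaults, not those from the original paper.
    """
    outputs = np.asarray(outputs, dtype=float)
    # Margins o_c - o_j for every incorrect class j != c.
    margins = outputs[target] - np.delete(outputs, target)
    return float(np.mean(alpha / (1.0 + np.exp(-beta * margins + zeta))))

# Monotonicity in action: a correctly classified sample (all margins
# positive) scores higher than a misclassified one (a negative margin).
good = cfm([0.9, 0.1, 0.2], target=0)
bad = cfm([0.3, 0.6, 0.2], target=0)
```

Because the sigmoid saturates, a sample that is already classified correctly by a wide margin contributes little gradient, which is one intuition behind CFM's strong training-set behavior.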
References
- A novel objective function for improved phoneme recognition using time-delay neural networks. IEEE Transactions on Neural Networks, 1990
- Constructing a generalizer superior to NETtalk via a mathematical theory of generalization. Neural Networks, 1990
- A comparison between criterion functions for linear classifiers, with an application to neural nets. IEEE Transactions on Systems, Man, and Cybernetics, 1989
- Universal approximation using feedforward networks with non-sigmoid hidden layer activation functions. Published by Institute of Electrical and Electronics Engineers (IEEE), 1989