Learning in certainty-factor-based multilayer neural networks for classification
- 1 January 1998
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 9 (1), 151-158
- https://doi.org/10.1109/72.655036
Abstract
The computational framework of rule-based neural networks inherits from both the neural network and the inference engine of an expert system. In one approach, the network activation function is based on the certainty-factor (CF) model of MYCIN-like systems. In this paper, it is shown theoretically that a neural network using the CF-based activation function requires relatively small sample sizes for correct generalization. This result is also confirmed by empirical studies in several independent domains.
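The abstract names the certainty-factor model of MYCIN-like systems as the basis of the network activation function. As a rough illustration only, the sketch below folds a unit's weighted inputs together with the standard EMYCIN combination rule; treating each clipped product `w * x` as a separate piece of evidence is an assumption made here for concreteness, not necessarily the exact activation defined in the paper.

```python
def combine_cf(cf1: float, cf2: float) -> float:
    """Standard EMYCIN rule for combining two certainty factors in [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 - cf1 * cf2          # both confirming
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 + cf1 * cf2          # both disconfirming
    denom = 1.0 - min(abs(cf1), abs(cf2))     # conflicting evidence
    if denom == 0.0:
        return 0.0                            # assumption: total contradiction -> 0
    return (cf1 + cf2) / denom


def cf_activation(inputs: list[float], weights: list[float]) -> float:
    """Hypothetical CF-based unit: each weighted input, clipped to [-1, 1],
    is treated as one piece of evidence and accumulated with combine_cf."""
    total = 0.0
    for x, w in zip(inputs, weights):
        evidence = max(-1.0, min(1.0, w * x))
        total = combine_cf(total, evidence)
    return total


# Example: two confirming pieces of evidence, 0.6 and 0.5, combine to 0.8,
# so the unit's output saturates smoothly toward 1 as evidence accumulates.
print(cf_activation([0.6, 0.5], [1.0, 1.0]))  # 0.8
```

Note how this activation differs from a sigmoid: the combination rule is bounded in [-1, 1] by construction and is order-independent for same-sign evidence, which is the behavior the CF model of MYCIN-like systems prescribes.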