On the ability of neural networks to perform generalization by induction
- 1 June 1989
- research article
- Published by Springer Nature in Biological Cybernetics
- Vol. 61 (2) , 125-128
- https://doi.org/10.1007/bf00204596
Abstract
The ability of neural networks to perform generalization by induction is the ability to learn an algorithm without the benefit of complete information about it. We consider the properties of networks and algorithms that determine the efficiency of generalization. These properties are described in quantitative terms. The most effective generalization is shown to be achieved by networks with the least admissible capacity. General conclusions are illustrated by computer simulations for a three-layered neural network. We draw a quantitative comparison between the general equations and specific results reported here and elsewhere.
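The capacity argument in the abstract can be illustrated numerically. The following is a toy, stdlib-only sketch (not the paper's three-layered simulation): given incomplete examples of a target Boolean rule, a lower-capacity hypothesis class (single threshold units, i.e. linearly separable functions) leaves fewer rules consistent with the data than the full class of all Boolean functions, so induction pins down the target with higher probability. The weight/threshold search range is an assumption chosen to cover the three-input case.

```python
# Toy illustration of "least admissible capacity gives best generalization":
# count how many hypotheses remain consistent with an incomplete training set.
import itertools

N = 3
inputs = list(itertools.product([0, 1], repeat=N))   # all 8 input patterns

# Class A: every Boolean function of 3 inputs (capacity 2^8 = 256 rules).
all_functions = [dict(zip(inputs, bits))
                 for bits in itertools.product([0, 1], repeat=len(inputs))]

# Class B: linearly separable functions only (one threshold unit) --
# a strict, lower-capacity subset of class A.
def is_linearly_separable(f):
    # Brute-force search over small integer weights and thresholds;
    # for 3 inputs this range is assumed sufficient.
    for w in itertools.product(range(-3, 4), repeat=N):
        for theta in range((-3), 4):
            if all((sum(wi * xi for wi, xi in zip(w, x)) > theta) == bool(f[x])
                   for x in inputs):
                return True
    return False

separable = [f for f in all_functions if is_linearly_separable(f)]

target = separable[7]        # an arbitrary separable target rule
train = inputs[:5]           # incomplete information: 5 of 8 patterns shown

def consistent(hypotheses):
    # Hypotheses that reproduce the target on every training pattern.
    return [f for f in hypotheses if all(f[x] == target[x] for x in train)]

for name, cls in [("all Boolean functions", all_functions),
                  ("separable functions ", separable)]:
    c = consistent(cls)
    exact = sum(1 for f in c if f == target)
    print(f"{name}: size={len(cls)}, consistent={len(c)}, "
          f"P(identified target)={exact / len(c):.3f}")
```

With 3 of the 8 patterns unseen, the full class always retains 2^3 = 8 consistent hypotheses, so the target is identified with probability 1/8; the separable class retains at most that many, so its identification probability is at least as high. This mirrors the abstract's claim that the least admissible capacity generalizes most effectively.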