Unsupervised dynamic learning in layered neural networks
- 7 September 1991
- journal article
- Published by IOP Publishing in Journal of Physics A: Mathematical and General
- Vol. 24 (17), 4219-4234
- https://doi.org/10.1088/0305-4470/24/17/032
Abstract
The authors consider a stochastic two-layer neural network of binary neurons in which the connections between the layers are updated according to the Hebb rule, whereas the lateral connections in the output layer are modified according to an anti-Hebb rule. In equilibrium the output overlap is found to be a linear transformation of the input overlap. They then extend the model by treating learning as a dynamic process, meaning that synaptic efficacies as well as neuronal states may vary in time. Despite the coupling of these two variables, they show that the behaviour of this particular model can be analysed in detail. It turns out that the network filters the information available at the input in such a way that important components of the input data pass through, whereas components with a low information content are suppressed.
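The learning scheme described in the abstract can be sketched in a few lines: a Hebbian update for the feedforward weights and an anti-Hebbian update for the lateral weights in the output layer. The layer sizes, learning rates, and the single lateral relaxation step below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the paper)
N_in, N_out = 100, 20
eta, mu = 0.01, 0.005          # Hebbian / anti-Hebbian learning rates

W = rng.normal(0.0, 0.1, (N_out, N_in))   # feedforward weights (Hebb rule)
L = np.zeros((N_out, N_out))              # lateral output weights (anti-Hebb rule)

for _ in range(500):
    x = rng.choice([-1.0, 1.0], size=N_in)   # binary (+/-1) input pattern
    y = np.sign(W @ x)                       # feedforward activation
    y = np.sign(W @ x + L @ y)               # one lateral relaxation step
    y[y == 0.0] = 1.0                        # keep states binary
    W += eta * np.outer(y, x)                # Hebb: strengthen co-active pairs
    L -= mu * np.outer(y, y)                 # anti-Hebb: decorrelate outputs
    np.fill_diagonal(L, 0.0)                 # no lateral self-coupling
```

The anti-Hebbian lateral term penalizes correlated output units, which is what lets the network suppress low-information components of the input while passing the dominant ones.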