Sign-constrained linear learning and diluting in neural networks
- 1 May 1991
- journal article
- Published by IOP Publishing in Journal of Physics A: Mathematical and General
- Vol. 24 (9), L495-L502
- https://doi.org/10.1088/0305-4470/24/9/008
Abstract
For neural networks in which each synapse has a predefined effect, excitatory or inhibitory, the simplex algorithm is applied as a learning rule. It is assumed that the given signs of the synapses can never be changed during learning. The maximal possible dilution of synapses resulting from the learning is found at the maximum storage capacity, for a model with all-positive or randomly distributed signs. For the case of infinitely many neurons, a replica-symmetric calculation of the free energy and of the distribution of coupling strengths is presented. The linear algorithm is also applied to networks with a more suitable choice of sign constraints, resulting in a higher storage capacity.
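Because the synaptic signs are fixed in advance, the storage problem becomes a linear program that a simplex solver can handle directly. The sketch below is a minimal illustration under stated assumptions, not the paper's exact formulation: it maximizes the stability margin (here called `kappa`) of a single sign-constrained perceptron storing random patterns, with the normalization written linearly as sum_i g_i J_i = N. The helper name `train_sign_constrained`, the use of SciPy's `linprog`, and the sizes N = 100, P = 40 are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

def train_sign_constrained(xi, targets, signs):
    """Maximize the stability margin kappa over couplings J whose signs
    are fixed in advance (g_i * J_i >= 0), with the linear normalization
    sum_i g_i J_i = N (equivalently sum_i |J_i| = N)."""
    P, N = xi.shape
    # Decision variables x = (J_1, ..., J_N, kappa); linprog minimizes,
    # so we minimize -kappa.
    c = np.zeros(N + 1)
    c[-1] = -1.0
    # Stability of every stored pattern:  t^mu (J . xi^mu) >= kappa,
    # rewritten as  -t^mu xi^mu . J + kappa <= 0.
    A_ub = np.hstack([-(targets[:, None] * xi), np.ones((P, 1))])
    b_ub = np.zeros(P)
    # Normalization sum_i g_i J_i = N; linear because the signs are fixed.
    A_eq = np.concatenate([signs, [0.0]]).reshape(1, -1)
    b_eq = [float(N)]
    # The sign constraints are simple bounds on the individual couplings.
    bounds = [(0, None) if g > 0 else (None, 0) for g in signs]
    bounds.append((None, None))  # kappa is free
    return linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

rng = np.random.default_rng(0)
N, P = 100, 40                                  # loading alpha = P/N = 0.4
xi = rng.choice([-1.0, 1.0], size=(P, N))       # random +-1 patterns
targets = rng.choice([-1.0, 1.0], size=P)       # desired outputs
signs = rng.choice([-1.0, 1.0], size=N)         # predefined synaptic signs

res = train_sign_constrained(xi, targets, signs)
if res.success:
    J = res.x[:N]
    # The simplex optimum sits at a vertex of the feasible polytope, so a
    # large fraction of the couplings comes out (numerically) zero: this
    # is the dilution produced by the learning.
    print(f"kappa = {res.x[-1]:.3f}")
    print(f"diluted fraction = {np.mean(np.isclose(J, 0.0)):.2f}")
else:
    print("no admissible couplings: loading is above the storage capacity")
```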