Fixed-weight on-line learning
- 1 March 1999
- Journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 10 (2), 272-283
- https://doi.org/10.1109/72.750553
Abstract
Conventional artificial neural networks perform functional mappings from their input space to their output space. The synaptic weights encode information about the mapping in a manner analogous to long-term memory in biological systems. This paper presents a method of designing neural networks in which recurrent signal loops store this knowledge in a manner analogous to short-term memory. The synaptic weights of these networks encode a learning algorithm, which gives the networks the ability to dynamically learn any functional mapping from a (possibly very large) set without changing any synaptic weights. These networks are adaptive dynamic systems: learning is on-line, taking place continually as part of the network's overall behavior rather than as a separate, externally driven process. We present four higher-order fixed-weight learning networks. Two of these networks have standard backpropagation embedded in their synaptic weights; the other two use a more efficient gradient-descent-based learning rule, discovered by examining variations in fixed-weight topology. Empirical tests show that all of these networks were able to learn functions from both discrete (Boolean) and continuous function sets. The networks were largely robust to perturbations in the synaptic weights; the exception was the recurrent connections used to store information, which required a tight tolerance of 0.5%. We found that the cost of these networks scales approximately in proportion to the total number of synapses. Finally, we consider evolving fixed-weight networks tailored to a specific problem class by analyzing the meta-learning cost surface of the networks presented.