Semi-distributed Representations and Catastrophic Forgetting in Connectionist Networks
- 1 January 1992
- Research article
- Published by Taylor & Francis in Connection Science
- Vol. 4 (3-4), 365-377
- https://doi.org/10.1080/09540099208946624
Abstract
A major problem with connectionist networks is that newly learned information may completely destroy previously learned information unless the network is continually retrained on the old information. This phenomenon, known as catastrophic forgetting, is unacceptable both for practical purposes and as a model of mind. This paper advances the claim that catastrophic forgetting is in part the result of the overlap of the system's distributed representations and can be reduced by reducing this overlap. A simple algorithm, called activation sharpening, is presented that allows a standard feed-forward backpropagation network to develop semi-distributed representations, thereby reducing the problem of catastrophic forgetting. Activation sharpening is discussed in light of recent work by other researchers who have experimented with this and other techniques for reducing catastrophic forgetting.
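The abstract names the algorithm but not its mechanics. Below is a minimal sketch of one plausible reading of activation sharpening: after each forward pass, the most active hidden units are nudged toward 1 and the rest toward 0, and the input-to-hidden weights are adjusted toward these sharpened activations alongside ordinary backpropagation. The network sizes, learning rate, sharpening factor `alpha`, and the choice to sharpen only the single most active unit (`k=1`) are illustrative assumptions, not the paper's settings.

```python
# Hypothetical sketch of activation sharpening on a one-hidden-layer
# sigmoid network; parameters are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hid, n_out = 8, 16, 8
W1 = rng.normal(0.0, 0.5, (n_in, n_hid))   # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (n_hid, n_out))  # hidden -> output weights

def train_step(x, t, lr=0.1, alpha=0.2, k=1):
    """One backprop step with activation sharpening on the hidden layer."""
    global W1, W2
    # Forward pass.
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)

    # Ordinary backprop deltas for the output error.
    d_out = (t - y) * y * (1.0 - y)
    d_hid = (W2 @ d_out) * h * (1.0 - h)

    # Sharpening: nudge the k most active hidden units toward 1 and
    # the rest toward 0, then treat the sharpened vector as a target
    # for the hidden activations.
    h_sharp = h - alpha * h
    top = np.argsort(h)[-k:]
    h_sharp[top] = h[top] + alpha * (1.0 - h[top])
    d_sharp = (h_sharp - h) * h * (1.0 - h)

    # Updates: normal backprop everywhere, plus the sharpening term
    # on the input-to-hidden weights only.
    W2 = W2 + lr * np.outer(h, d_out)
    W1 = W1 + lr * np.outer(x, d_hid + d_sharp)
    return float(np.mean((t - y) ** 2))

# Tiny demo: train one random input-target association.
x = rng.random(n_in)
t = (rng.random(n_out) > 0.5).astype(float)
for _ in range(500):
    mse = train_step(x, t)
print(f"final MSE: {mse:.4f}")
```

The intuition behind the update: driving all but a few hidden units toward 0 for each pattern makes different patterns recruit largely disjoint sets of active units, so learning a new pattern perturbs fewer of the weights that encode old patterns, which is the overlap-reduction argument the abstract makes.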