Learning Invariance from Transformation Sequences
- 1 June 1991
- journal article
- Published by MIT Press in Neural Computation
- Vol. 3 (2), 194-200
- https://doi.org/10.1162/neco.1991.3.2.194
Abstract
The visual system can reliably identify objects even when the retinal image is transformed considerably by commonly occurring changes in the environment. A local learning rule is proposed, which allows a network to learn to generalize across such transformations. During the learning phase, the network is exposed to temporal sequences of patterns undergoing the transformation. An application of the algorithm is presented in which the network learns invariance to shift in retinal position. Such a principle may be involved in the development of the characteristic shift invariance property of complex cells in the primary visual cortex, and also in the development of more complicated invariance properties of neurons in higher visual areas.
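The abstract describes a local rule that learns invariance by exploiting temporal continuity: patterns in a sequence are transformed versions of the same object, so a unit's learning signal can be smoothed over time. The following is a minimal sketch of one such temporal-trace Hebbian rule, in the spirit of the paper; the exact update used in the publication may differ. All names (`alpha`, `delta`, the cyclic one-hot sweep standing in for a shifting retinal pattern) are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8        # number of retinal positions
alpha = 0.05 # learning rate
delta = 0.8  # persistence of the activity trace (0 = plain Hebbian)

# One linear unit with random positive weights, normalized to sum to 1.
w = rng.random(N)
w /= w.sum()

def one_hot(i, n=N):
    """A point stimulus at retinal position i (illustrative input coding)."""
    x = np.zeros(n)
    x[i] = 1.0
    return x

trace = 0.0
for epoch in range(200):
    # The stimulus sweeps across the retina, giving a temporal sequence
    # of shifted versions of the same pattern.
    for pos in range(N):
        x = one_hot(pos)
        y = float(w @ x)                        # unit response
        trace = delta * trace + (1 - delta) * y # temporally smoothed activity
        w += alpha * trace * (x - w)            # trace-modulated Hebbian update
```

Because the trace links inputs that occur close together in time, weights for all swept positions are strengthened together, and the unit's response becomes roughly equal at every position, i.e. shift-invariant. With `delta = 0`, by contrast, the rule reduces to an instantaneous Hebbian update with no pressure toward invariance.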