Optimal, Unsupervised Learning in Invariant Object Recognition
Open Access
- 1 May 1997
- journal article
- Published by MIT Press in Neural Computation
- Vol. 9 (4) , 883-894
- https://doi.org/10.1162/neco.1997.9.4.883
Abstract
A means for establishing transformation-invariant representations of objects is proposed and analyzed, in which different views are associated on the basis of the temporal order of the presentation of these views, as well as their spatial similarity. Assuming knowledge of the distribution of presentation times, an optimal linear learning rule is derived. Simulations of a competitive network trained on a character recognition task are then used to highlight the success of this learning rule in relation to simple Hebbian learning and to show that the theory can give accurate quantitative predictions for the optimal parameters for such networks.
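The temporal-association idea in the abstract can be sketched with a simple "trace" learning rule in a small competitive network, in the spirit of Földiák's (1991) trace learning. This is a minimal illustration, not the paper's derived optimal rule (whose exact form depends on the assumed distribution of presentation times); all names and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_units = 16, 4
W = rng.random((n_units, n_inputs))
W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep each weight vector unit length

eta = 0.2     # trace decay: how much the current output leaks into the trace
alpha = 0.05  # learning rate (illustrative value)
trace = np.zeros(n_units)

def step(x):
    """One view presentation: winner-take-all activation, then a Hebbian
    update gated by the temporal trace of recent activity, so successive
    views become associated with the same unit."""
    global trace, W
    y = np.zeros(n_units)
    y[np.argmax(W @ x)] = 1.0             # hard competition: a single winner
    trace = (1 - eta) * trace + eta * y   # exponentially decaying activity trace
    W += alpha * np.outer(trace, x)       # associate current input with recent winners
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # renormalize (competition constraint)
    return y

# Present two "views" of the same object in temporal succession: the trace
# links them, so the same unit tends to win for both after training.
view_a = rng.random(n_inputs)
view_b = view_a + 0.1 * rng.random(n_inputs)  # a slightly transformed view
for _ in range(50):
    step(view_a)
    step(view_b)
```

After training, `np.argmax(W @ view_a)` and `np.argmax(W @ view_b)` pick the same unit, i.e. the representation is invariant across the two temporally paired views. The paper's contribution, by contrast, is to make the weighting of past activity optimal given the presentation-time statistics rather than fixing it by an ad hoc decay constant.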