Self-organisation of transformation-invariant detectors for constituents of perceptual patterns
- 1 November 1994
- journal article
- Published by Taylor & Francis in Network: Computation in Neural Systems
- Vol. 5 (4), 471-496
- https://doi.org/10.1088/0954-898x/5/4/004
Abstract
A geometrical interpretation of the elementary constituents which make up perceptual patterns is proposed: if a number of different pattern-vectors lie approximately within the same plane in the pattern-vector space, those patterns can be interpreted as sharing a common constituent. Individual constituents are associated with individual planes of patterns: a pattern lying within an intersection of several such planes corresponds to a combination of several constituents. This interpretation can model patterns as hierarchical combinations of constituents that are themselves combinations of yet more elementary constituents. A neuron can develop transformation-invariances in its recognition-response by aligning its synaptic vector with one of the plane-normals: a pattern-vector's projection along the synaptic vector is then an invariant of all the patterns on the plane. In this way, discriminating detectors for individual constituents can self-organise through Hebbian adaptation. Transformation-invariances that can self-organise in multiple-level vision systems include shape-tolerance and local position-tolerance. These principles are illustrated with demonstrations of transformation-tolerant face-recognition.

1. Transformation-invariant neural response

1.1. Geometrical interpretation of invariance

This article interprets invariances of perceptual neurons' pattern-responses in terms of the geometry of the space of pattern-vectors: the orientation of a neuron's synaptic weight-vector with respect to the distribution of pattern-vectors determines how its response varies from pattern to pattern, and so determines for which patterns and under which transformations the response remains the same.

1.2. Pattern-vectors and their frequency-distribution

Neural perception-systems are assemblies of perception-units: neurons which receive their input information as a collection of variable activities at their input synapses. Neurons at the lowest level of a perception-system receive their inputs from an array of sensors such as photoreceptors, while those at higher levels receive input-synapses from other perception-units.
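The abstract's central geometric claim can be checked with a short numerical sketch. The Python/NumPy snippet below is an illustration only, not code from the paper: the names (`normal`, `span`, `patterns`), the dimensions, and the noise level are assumptions chosen for the demonstration. It constructs pattern-vectors lying approximately on a single plane and shows that a synaptic weight-vector aligned with the plane-normal yields an almost constant response across those patterns, whereas a randomly oriented weight-vector does not. The paper's proposed mechanism for finding such a weight-vector, Hebbian adaptation, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): 16-dimensional pattern space,
# patterns confined approximately to one affine plane spanned by two directions.
dim = 16
normal = rng.normal(size=dim)
normal /= np.linalg.norm(normal)          # unit plane-normal
offset = 2.0                              # distance of the plane from the origin

# Two spanning directions orthogonal to the normal (columns 2-3 of a QR basis).
basis = np.linalg.qr(np.column_stack([normal, rng.normal(size=(dim, 2))]))[0]
span = basis[:, 1:3]

# Pattern-vectors: points on the plane plus a little noise ("approximately" planar).
coeffs = rng.normal(size=(100, 2))
patterns = offset * normal + coeffs @ span.T + 0.01 * rng.normal(size=(100, dim))

# A neuron whose synaptic vector is aligned with the plane-normal gives an
# (almost) invariant response: the projection along the normal is ~constant.
aligned_response = patterns @ normal
print("aligned weight: response std/mean =",
      aligned_response.std() / aligned_response.mean())

# A randomly oriented synaptic vector picks up the within-plane variation,
# so its response varies strongly from pattern to pattern.
random_w = rng.normal(size=dim)
random_w /= np.linalg.norm(random_w)
print("random weight:  response std =", (patterns @ random_w).std())
```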