Learning Factorial Codes by Predictability Minimization
- 1 November 1992
- journal article
- Published by MIT Press in Neural Computation
- Vol. 4 (6), 863-879
- https://doi.org/10.1162/neco.1992.4.6.863
Abstract
I propose a novel general principle for unsupervised learning of distributed non-redundant internal representations of input patterns. The principle is based on two opposing forces. For each representational unit there is an adaptive predictor which tries to predict the unit from the remaining units. In turn, each unit tries to react to the environment such that it minimizes its predictability. This encourages each unit to filter "abstract concepts" out of the environmental input such that...
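The abstract describes an adversarial training scheme: per-unit predictors are trained to predict each code unit from the others, while the code-producing network is trained to make its units unpredictable, pushing them toward statistical independence. The following sketch illustrates that two-phase loop under simplifying assumptions not taken from the paper: a single linear-sigmoid encoder as the code layer, linear predictors, alternating gradient steps, and toy random inputs; all names and sizes are hypothetical.

```python
# Minimal sketch of predictability minimization (assumed setup, not the
# paper's exact architecture): an encoder produces code units, one small
# predictor per unit tries to predict it from the remaining units, and the
# encoder is updated to maximize the predictors' error.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_in, n_code = 8, 4
encoder = nn.Sequential(nn.Linear(n_in, n_code), nn.Sigmoid())
# One predictor per code unit; each sees the other n_code - 1 units.
predictors = nn.ModuleList(nn.Linear(n_code - 1, 1) for _ in range(n_code))

opt_enc = torch.optim.SGD(encoder.parameters(), lr=0.1)
opt_pred = torch.optim.SGD(predictors.parameters(), lr=0.1)

x = torch.rand(256, n_in)  # toy input patterns (assumption)

def total_prediction_error(code):
    """Sum over units of the mean squared error of 'that unit's' predictor."""
    errs = []
    for i, p in enumerate(predictors):
        others = torch.cat([code[:, :i], code[:, i + 1:]], dim=1)
        errs.append(((p(others) - code[:, i:i + 1]) ** 2).mean())
    return torch.stack(errs).sum()

for step in range(200):
    # Phase 1: predictors learn to predict each unit from the others
    # (the code is detached so the encoder is not updated here).
    code = encoder(x).detach()
    loss_pred = total_prediction_error(code)
    opt_pred.zero_grad(); loss_pred.backward(); opt_pred.step()

    # Phase 2: the encoder minimizes each unit's predictability, i.e.
    # maximizes the predictors' error, encouraging independent code units.
    code = encoder(x)
    loss_enc = -total_prediction_error(code)
    opt_enc.zero_grad(); loss_enc.backward(); opt_enc.step()
```

The alternating updates mirror the two opposing forces in the abstract: predictors chase the current code, and the code moves away from whatever the predictors can capture.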