Reducing the Dimensionality of Data with Neural Networks
- 28 July 2006
- journal article
- Published by American Association for the Advancement of Science (AAAS) in Science
- Vol. 313 (5786), 504-507
- https://doi.org/10.1126/science.1127647
Abstract
High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
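The abstract's core idea, reconstructing high-dimensional inputs through a small central code layer, can be illustrated with a minimal sketch. The following is not the paper's method (which uses deep networks with layer-by-layer pretraining); it is a toy single-hidden-layer autoencoder in NumPy, with invented variable names and toy data, trained by plain gradient descent to show that the reconstruction error through a 2-D bottleneck decreases:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 10-D that lie near a nonlinear 2-D manifold.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
X = np.tanh(latent @ mixing) + 0.01 * rng.normal(size=(200, 10))

# Tiny autoencoder: 10 -> 2 (central code layer) -> 10.
# tanh encoder, linear decoder; weights start small and random.
W_enc = 0.1 * rng.normal(size=(10, 2))
W_dec = 0.1 * rng.normal(size=(2, 10))

def forward(X):
    code = np.tanh(X @ W_enc)   # low-dimensional code
    recon = code @ W_dec        # reconstruction of the input
    return code, recon

lr = 0.05
_, recon = forward(X)
err_before = np.mean((X - recon) ** 2)

for _ in range(500):
    code, recon = forward(X)
    # Gradient of mean squared reconstruction error, backpropagated
    # through the decoder and the tanh encoder.
    grad_recon = 2.0 * (recon - X) / len(X)
    grad_W_dec = code.T @ grad_recon
    grad_code = grad_recon @ W_dec.T
    grad_W_enc = X.T @ (grad_code * (1.0 - code ** 2))  # tanh'(z) = 1 - tanh(z)^2
    W_dec -= lr * grad_W_dec
    W_enc -= lr * grad_W_enc

_, recon = forward(X)
err_after = np.mean((X - recon) ** 2)
print(f"reconstruction MSE: {err_before:.4f} -> {err_after:.4f}")
```

With only one hidden layer this toy network trains fine from random initialization; the paper's point is that much deeper autoencoders do not, which is why the proposed weight-initialization (pretraining) scheme matters.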