Deep, Big, Simple Neural Nets for Handwritten Digit Recognition
- 1 December 2010
- journal article
- Published by MIT Press in Neural Computation
- Vol. 22 (12), 3207–3220
- https://doi.org/10.1162/neco_a_00052
Abstract
Good old online backpropagation for plain multilayer perceptrons yields a very low 0.35% error rate on the MNIST handwritten digits benchmark. All we need to achieve this best result so far are many hidden layers, many neurons per layer, numerous deformed training images to avoid overfitting, and graphics cards to greatly speed up learning.
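The abstract's recipe — a plain multilayer perceptron trained with per-sample ("online") backpropagation — can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the layer sizes, learning rate, and synthetic data below are placeholders (the paper trains much deeper and wider nets on deformed MNIST images, on GPUs).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MNIST: 256-dim inputs, 10 classes. Sizes are illustrative
# only; the paper uses images and networks with thousands of units per layer.
n, d, k = 200, 256, 10
X = rng.standard_normal((n, d))
true_W = rng.standard_normal((d, k))
y = np.argmax(X @ true_W, axis=1)          # synthetic, linearly separable labels

# One hidden layer for brevity; "deep" here just means stacking more of these.
h = 64
W1 = rng.standard_normal((d, h)) * np.sqrt(1.0 / d)
b1 = np.zeros(h)
W2 = rng.standard_normal((h, k)) * np.sqrt(1.0 / h)
b2 = np.zeros(k)
lr = 0.01

def forward(x):
    a1 = np.tanh(x @ W1 + b1)              # tanh hidden units
    z2 = a1 @ W2 + b2
    p = np.exp(z2 - z2.max())              # stable softmax
    return a1, p / p.sum()

def loss_acc():
    correct, nll = 0, 0.0
    for x, t in zip(X, y):
        _, p = forward(x)
        nll -= np.log(p[t] + 1e-12)
        correct += int(p.argmax() == t)
    return nll / n, correct / n

before, _ = loss_acc()
for epoch in range(20):
    for i in rng.permutation(n):           # "online": update after every sample
        x, t = X[i], y[i]
        a1, p = forward(x)
        g2 = p.copy(); g2[t] -= 1.0        # d(cross-entropy)/d(logits)
        g1 = (W2 @ g2) * (1.0 - a1**2)     # backprop through tanh
        W2 -= lr * np.outer(a1, g2); b2 -= lr * g2
        W1 -= lr * np.outer(x, g1);  b1 -= lr * g1
after, acc = loss_acc()
print(before, after, acc)
```

The per-sample weight update (rather than mini-batch averaging) is what the paper means by "online" backpropagation; the deformed-training-image step it relies on to avoid overfitting is omitted here.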