A stochastic architecture for neural nets

Abstract
A stochastic digital architecture is described for simulating the operation of Hopfield neural networks. This architecture provides reprogrammability (since synaptic weights are stored in digital shift registers), large dynamic range (by using either fixed- or floating-point weights), annealing (by coupling variable neuron gains with noise from stochastic arithmetic), high execution speeds (approximately N x 10^8 connections per second), expandability (by cascading multiple chips to host large networks), and practicality (by building with very conservative MOS device technologies).
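The coupling of neuron gain to stochastic-arithmetic noise can be illustrated in simulation: in unipolar stochastic arithmetic, a product is estimated by ANDing two random bitstreams, and shorter streams give noisier estimates. Lengthening the streams over time therefore acts like lowering a temperature. The Python sketch below is illustrative only and is not the paper's hardware design; the unipolar encoding, the geometric length schedule, and the threshold rule are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_product(pa, pb, length):
    """Estimate pa*pb by ANDing two random bitstreams of the given length.

    Operands must lie in [0, 1] (unipolar stochastic encoding). Shorter
    streams give noisier estimates -- the annealing noise in this sketch.
    """
    a = rng.random(length) < pa
    b = rng.random(length) < pb
    return (a & b).mean()

def hopfield_anneal(W, s, sweeps=50, len_start=16, len_end=2048):
    """Asynchronous Hopfield updates with bitstream-length annealing.

    W: symmetric weights scaled into [0, 1] with zero diagonal.
    s: binary state vector (0/1 entries).
    Early sweeps use short streams (high noise, 'hot'); later sweeps use
    long streams (low noise, 'cold'), mimicking gain annealing.
    """
    n = len(s)
    theta = 0.5 * W.sum(axis=1)  # simple per-neuron threshold (an assumption)
    for t in range(sweeps):
        frac = t / max(sweeps - 1, 1)
        length = int(len_start * (len_end / len_start) ** frac)
        for i in rng.permutation(n):
            # Net input estimated with noisy stochastic products.
            h = sum(stochastic_product(W[i, j], s[j], length)
                    for j in range(n) if s[j] > 0)
            s[i] = 1.0 if h > theta[i] else 0.0
    return s

# Example: recall a single stored pattern from a corrupted start state.
pattern = np.array([1., 0., 1., 0., 1., 0.])
W = np.outer(pattern, pattern)        # one-pattern Hebbian weights
np.fill_diagonal(W, 0.0)
start = pattern.copy()
start[0] = 0.0                        # corrupt one bit
print(hopfield_anneal(W, start))      # should settle back to the pattern
```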