Simulating artificial neural networks on parallel architectures
- 1 March 1996
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in Computer
- Vol. 29 (3), 56-63
- https://doi.org/10.1109/2.485893
Abstract
Neural computation means organizing processing into a number of massively interconnected processing elements that exchange signals. Processing within an element usually involves summing weighted input values, applying a (non)linear function to the input sum, and forwarding the result to other elements. Since the basic principle of neurocomputation is learning by example, such processing must be repeated again and again, with weights being adjusted until the network has learned the problem. An artificial neural network can be implemented as a simulation programmed on a general-purpose computer or as an emulation realized on special-purpose hardware. Although sequential simulations are widespread and offer comfortable software environments for developing and analyzing neural networks, the computational needs of realistic applications exceed the capabilities of sequential computers. Parallelization is therefore necessary to cope with the high computational and communication demands of neuro-applications. Because matrix-vector operations are at the core of many neuroalgorithms, processing is often organized to ensure their efficient implementation. The first implementations ran on general-purpose parallel machines. When these approached the performance limits of standard supercomputers, the research focus shifted to architectural improvements: one approach was to build general-purpose programmable neurohardware; another was to construct special-purpose neurohardware that emulates a particular neuromodel. This article discusses techniques and means for parallelizing neurosimulations, both at a high programming level and at a low hardware-emulation level.
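The per-element processing the abstract describes (a weighted input sum passed through a (non)linear function, with a whole layer expressible as a matrix-vector product) can be sketched minimally as follows; the weights, layer size, and logistic activation here are illustrative assumptions, not taken from the article:

```python
import math

def sigmoid(x):
    # illustrative (non)linear function applied to the weighted input sum
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(weights, inputs):
    # One layer as a matrix-vector product: each row of `weights` holds
    # the incoming connection weights of one processing element, so the
    # inner sum is that element's weighted input sum.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

# A hypothetical 2-input, 2-element layer with made-up weight values.
W = [[0.5, -0.25],
     [1.0,  0.75]]
out = layer_forward(W, [1.0, 2.0])
print(out)
```

Because every row's weighted sum is independent, this is exactly the kind of computation that parallel implementations distribute across processors, whether by rows (elements) or by blocks of the weight matrix.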