Learning in linear systolic neural network engines: analysis and implementation
- 1 July 1994
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 5 (4), 584-593
- https://doi.org/10.1109/72.298228
Abstract
Linear systolic processor arrays are a widely proposed digital architecture for neural networks. This paper reports the analysis of a range of training algorithms implemented on a linear systolic ring, with a view to (a) identifying low-level instruction requirements, (b) assessing different hardware structures for processing element (PE) implementation, and (c) evaluating the impact of different array controller designs. Quantitative data are derived and used to determine cost-effective PE and controller hardware constructs.
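For readers unfamiliar with the mapping, the sketch below illustrates in Python how a fully connected layer's forward pass, y = f(Wx), can be laid out on a linear systolic ring: one PE per output neuron, with the activation vector circulating around the ring one position per step. This is a minimal illustrative assumption about one common mapping, not the PE design, instruction set, or training algorithms analysed in the paper; the function names and the choice of sigmoid activation are hypothetical.

```python
# Minimal sketch (assumed mapping, not the paper's implementation) of a
# forward pass on a linear systolic ring: PE i holds row i of the weight
# matrix and accumulates its dot product as activations shift around the ring.
import math


def systolic_ring_forward(W, x):
    """Simulate a ring of len(x) PEs computing y = sigmoid(W @ x).

    W : list of N weight rows, each of length N
    x : input activation vector of length N
    """
    n = len(x)
    acc = [0.0] * n      # one partial sum per PE
    ring = list(x)       # activation currently resident in each PE

    for step in range(n):
        # PE i starts at column i, so no two PEs need the same activation
        # in the same step; each multiply-accumulate uses the local value.
        for pe in range(n):
            col = (pe + step) % n
            acc[pe] += W[pe][col] * ring[pe]
        # Shift activations one position around the ring.
        ring = ring[1:] + ring[:1]

    # Apply the (assumed) sigmoid activation locally in each PE.
    return [1.0 / (1.0 + math.exp(-a)) for a in acc]


if __name__ == "__main__":
    W = [[0.1, 0.2], [0.3, 0.4]]
    x = [1.0, -1.0]
    print(systolic_ring_forward(W, x))
```

The key property of the ring layout is that every PE performs one multiply-accumulate per step with only nearest-neighbour communication, which is why instruction-level costs and controller design, the focus of the paper, dominate the achievable throughput.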