Backpropagation in linear arrays - a performance analysis and optimization
- 1 May 1995
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 6 (3) , 583-595
- https://doi.org/10.1109/72.377965
Abstract
Neural networks are valuable tools for supporting a wide range of image processing applications. For video-rate operation, special-purpose parallel hardware is often necessary. One of the most common architectures used for this purpose is the linear systolic array. The design and implementation of multi-layer neural networks in linear systolic arrays can be complex, however. This paper demonstrates that the smallest network is not necessarily the best in terms of learning or recall times. Furthermore, this paper shows that the manner in which networks are mapped onto a particular hardware structure affects both the performance of the application and the efficiency with which the hardware resources are used. We analyze and identify how best to structure neural networks to optimize throughput, latency, and hardware utilization. We use the HANNIBAL neural network processor as a research vehicle for these investigations and demonstrate the value of the proposed techniques through a number of example applications.
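The claim that the smallest network is not necessarily the fastest can be illustrated with a simple cost model. Assuming neurons are distributed round-robin across the processing elements (PEs) of a linear array, each PE computes its share of a layer's matrix-vector product serially, so per-input time scales with ceil(neurons / PEs). This is an illustrative sketch, not the paper's exact cost function; the function name and parameters are hypothetical.

```python
import math

def layer_cycles(n_neurons: int, n_inputs: int, n_pes: int) -> int:
    """Illustrative cycle count for one layer's forward pass on a
    linear array of n_pes processing elements, assuming neurons are
    distributed round-robin and each PE processes its share serially.
    (A simplified model, not HANNIBAL's actual timing.)"""
    return math.ceil(n_neurons / n_pes) * n_inputs

# On a hypothetical 16-PE array with 256 inputs per neuron, a
# 65-neuron layer costs as much as an 80-neuron layer, because both
# need ceil(n / 16) = 5 serial passes; shrinking to 64 neurons
# drops that to 4 passes.
print(layer_cycles(65, 256, 16))  # 1280
print(layer_cycles(80, 256, 16))  # 1280
print(layer_cycles(64, 256, 16))  # 1024
```

Under this model, only layer sizes at multiples of the PE count use the hardware fully, which is one way the mapping of a network onto the array determines both performance and resource efficiency.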