Finite precision error analysis of neural network hardware implementations
- 1 March 1993
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Computers
- Vol. 42 (3), 281-290
- https://doi.org/10.1109/12.210171
Abstract
Through parallel processing, low-precision fixed-point hardware can be used to build a very high speed neural network computing engine, where the low precision yields a drastic reduction in system cost. The reduced silicon area required to implement a single processing unit is exploited by placing multiple processing units on a single piece of silicon and operating them in parallel. The important question that arises is how much precision is required to implement neural network algorithms on such low-precision hardware. A theoretical analysis of the error due to finite-precision computation was undertaken to determine the precision necessary for successful forward retrieving and back-propagation learning in a multilayer perceptron. This analysis can easily be extended to provide a general finite-precision analysis technique by which most neural network algorithms under any set of hardware constraints may be evaluated.
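To make the precision question concrete, the sketch below (not from the paper; the layer sizes, weight scales, and rounding scheme are illustrative assumptions) compares a floating-point MLP forward retrieval pass against the same pass with every operand rounded to a fixed-point grid, showing how the output error grows as fractional bits are removed.

```python
# Minimal sketch, assuming a simple uniform fixed-point rounding model;
# it is not the authors' analysis, only an empirical illustration of the
# effect their theory quantifies.
import numpy as np

def quantize(x, frac_bits):
    """Round to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def forward(x, weights, frac_bits=None):
    """One MLP forward (retrieval) pass; optionally quantize every operand."""
    q = (lambda v: quantize(v, frac_bits)) if frac_bits is not None else (lambda v: v)
    a = q(x)
    for W in weights:
        a = np.tanh(q(q(a) @ q(W)))  # quantize inputs, weights, and products
        a = q(a)
    return a

rng = np.random.default_rng(0)
# Hypothetical 16-16-4 network with random weights (illustrative only).
weights = [rng.normal(scale=0.5, size=(16, 16)),
           rng.normal(scale=0.5, size=(16, 4))]
x = rng.normal(size=16)

exact = forward(x, weights)  # full float64 reference
for bits in (16, 12, 8, 6, 4):
    approx = forward(x, weights, frac_bits=bits)
    err = np.max(np.abs(exact - approx))
    print(f"{bits:2d} fractional bits: max |error| = {err:.2e}")
```

Running the sketch shows the output error shrinking roughly in step with the quantization step size as fractional bits are added, which is the kind of trade-off the paper's analysis makes precise for both retrieval and learning.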