The Impact of Arithmetic Representation on Implementing MLP-BP on FPGAs: A Study
- 2 January 2007
- journal article
- research article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Neural Networks
- Vol. 18 (1), 240-252
- https://doi.org/10.1109/tnn.2006.883002
Abstract
In this paper, arithmetic representations for implementing multilayer perceptron neural networks trained with the error backpropagation algorithm (MLP-BP) on field-programmable gate arrays (FPGAs) are examined in detail. Both floating-point (FLP) and fixed-point (FXP) formats are studied, and the effects of representation precision and FPGA area requirements are considered. A generic very high-speed integrated circuit hardware description language (VHDL) program was developed to allow experimentation with a large number of formats and designs. The results show that an MLP-BP network uses fewer clock cycles and consumes less real estate when compiled in an FXP format, compared with a larger and slower compilation in an FLP format of similar data representation width (in bits) or similar precision and range.
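The following is a minimal sketch (in C rather than the paper's VHDL, and not taken from the paper) of the fixed-point trade-off the abstract describes: a Q4.12 multiply of a weight and an input, the basic operation of an MLP datapath. The Q4.12 format, the 16-bit word width, and the numeric values are illustrative assumptions only; the paper sweeps such format parameters to measure precision versus FPGA area.

```c
/* Illustrative sketch (not from the paper): a Q4.12 fixed-point multiply
 * of the kind an FXP MLP-BP datapath would perform for weight * input.
 * The word width (16 bits) and fractional bits (12) are arbitrary choices
 * here; narrower formats save FPGA area but lose precision and range. */
#include <stdint.h>
#include <stdio.h>

#define FRAC_BITS 12                          /* fractional bits of Q4.12        */
#define TO_FXP(x) ((int16_t)((x) * (1 << FRAC_BITS)))   /* real -> fixed-point   */
#define TO_REAL(x) ((double)(x) / (1 << FRAC_BITS))     /* fixed-point -> real   */

/* Fixed-point multiply: 16x16 -> 32-bit product, rescaled back to Q4.12. */
static int16_t fxp_mul(int16_t a, int16_t b)
{
    int32_t p = (int32_t)a * (int32_t)b;      /* full-precision product          */
    return (int16_t)(p >> FRAC_BITS);         /* drop extra fractional bits      */
}

int main(void)
{
    /* One synapse: y = w * x, computed in both representations. */
    double w = 0.71, x = 1.30;
    int16_t wq = TO_FXP(w), xq = TO_FXP(x);

    printf("floating-point result : %f\n", w * x);
    printf("Q4.12 fixed-point     : %f\n", TO_REAL(fxp_mul(wq, xq)));
    return 0;
}
```

The small discrepancy between the two printed results is the quantization error that shrinks as more fractional bits are allocated, at the cost of a wider (and larger) FPGA datapath.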