Implementation of artificial neural networks on a reconfigurable hardware accelerator
- 25 June 2003
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
The hardware implementation of three different artificial neural networks is presented. The implementations build on the reconfigurable hardware accelerator RAPTOR2000, which is based on FPGAs. The investigated neural network architectures are neural associative memories, self-organizing feature maps, and basis function networks. Key implementation issues are considered; in particular, the resource efficiency and performance of the presented realizations are discussed.
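To make concrete what kind of computation such an accelerator parallelizes, the following is a minimal software sketch (an illustrative assumption, not the paper's implementation) of one training step of a self-organizing feature map: the best-matching-unit search and the neighborhood-weighted update of the codebook vectors.

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    """One SOM training step for input vector x.

    weights: (rows, cols, dim) array of codebook vectors.
    Returns the (row, col) index of the best-matching unit.
    """
    rows, cols, _ = weights.shape
    # Distance of every neuron's codebook vector to the input.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    # Gaussian neighborhood around the best-matching unit on the 2-D grid.
    grid_r, grid_c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    grid_dist_sq = (grid_r - bmu[0]) ** 2 + (grid_c - bmu[1]) ** 2
    h = np.exp(-grid_dist_sq / (2.0 * sigma ** 2))
    # Move all codebook vectors toward x, scaled by neighborhood and learning rate.
    weights += lr * h[:, :, None] * (x - weights)
    return bmu

# Example: a 10x10 map of 3-dimensional codebook vectors trained on random data.
rng = np.random.default_rng(0)
w = rng.random((10, 10, 3))
for sample in rng.random((100, 3)):
    som_step(w, sample)
```

The distance computation and weight update over all neurons are independent per neuron, which is why map-parallel hardware such as an FPGA-based accelerator can speed up this step substantially.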