A Concurrent Adaptive Conjugate Gradient Learning Algorithm on MIMD Shared-Memory Machines
- 1 June 1993
- journal article
- Published by SAGE Publications in The International Journal of Supercomputing Applications
- Vol. 7 (2), 155-166
- https://doi.org/10.1177/109434209300700206
Abstract
A concurrent adaptive conjugate gradient learning algorithm has been developed for training of multilayer feed-forward neural networks and implemented in C on a MIMD shared-memory machine (CRAY Y-MP/8-864 supercomputer). The learning algorithm has been applied to the domain of image recognition. The performance of the algorithm has been evaluated by applying it to two large-scale training examples with 2,304 training instances. The concurrent adaptive neural networks algorithm has superior convergence properties compared with the concurrent momentum back-propagation algorithm. A maximum speedup of about 7.9 is achieved using eight processors for a large network with 4,160 links as a result of microtasking only. When vectorization is combined with microtasking, a maximum speedup of about 44 is realized using eight processors.