Parallelized formulation of the maximum likelihood-expectation maximization algorithm for fine-grain message-passing architectures

Abstract
Recent architectural and technological advances have made feasible a new class of massively parallel processing systems based on a fine-grain, message-passing computational model. These machines offer a new alternative for developing fast, cost-efficient Maximum Likelihood-Expectation Maximization (ML-EM) algorithmic formulations. As an important first step in determining the potential performance benefits of such formulations, we have developed an ML-EM algorithm suited to the high-communication, low-memory (HCLM) execution model supported by this new class of machines. Evaluation of this algorithm indicates a normalized least-square error comparable to, or better than, that obtained via a sequential ray-driven ML-EM formulation, as well as an effective speedup in execution time, as determined via discrete-event simulation of the Pica multiprocessor system currently under development at the Georgia Institute of Technology.
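For context, the classical (sequential) ML-EM iteration that formulations such as the one above parallelize can be sketched as follows. This is a minimal illustration of the standard update for emission tomography, not the paper's HCLM formulation; the array names, shapes, and numerical guards are illustrative assumptions.

```python
import numpy as np

def ml_em_update(lam, A, y):
    """One ML-EM iteration for emission tomography (illustrative sketch).

    lam : (n_pixels,)  current nonnegative image estimate
    A   : (n_rays, n_pixels) system matrix; A[i, j] is the probability
          that an emission from pixel j is detected along ray i
    y   : (n_rays,)  measured counts
    """
    proj = A @ lam                        # forward projection along each ray
    ratio = y / np.maximum(proj, 1e-12)   # measured / estimated counts
    sens = np.maximum(A.sum(axis=0), 1e-12)  # per-pixel sensitivity
    return lam * (A.T @ ratio) / sens     # multiplicative EM update
```

The update is multiplicative, so a strictly positive initial estimate stays nonnegative, and each iteration is guaranteed not to decrease the Poisson log-likelihood, which is the property the parallel formulation must preserve.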