An integrated compile-time/run-time software distributed shared memory system
- 1 September 1996
- journal article
- Published by Association for Computing Machinery (ACM) in ACM SIGOPS Operating Systems Review
- Vol. 30 (5), 186-197
- https://doi.org/10.1145/248208.237181
Abstract
On a distributed memory machine, hand-coded message passing yields the most efficient execution, but it is difficult to use. Parallelizing compilers can approach the performance of hand-coded message passing by translating data-parallel programs into message-passing programs, but efficient execution is limited to those programs for which precise analysis can be carried out. Shared memory is easier to program than message passing, and its domain is not constrained by the limitations of parallelizing compilers, but it lags in performance. Our goal is to close that performance gap while retaining the benefits of shared memory. In other words, our goal is (1) to make shared memory as efficient as message passing, whether hand-coded or compiler-generated, (2) to retain its ease of programming, and (3) to retain the broader class of applications it supports.

To this end we have designed and implemented an integrated compile-time and run-time software DSM system. The programming model remains identical to that of the original pure run-time DSM system, and no user intervention is required to obtain the benefits of our system. The compiler computes data access patterns for the individual processors. It then performs a source-to-source transformation, inserting calls into the program that inform the run-time system of the computed access patterns. The run-time system uses this information to aggregate communication, to aggregate data and synchronization into a single message, to eliminate consistency overhead, and to replace global synchronization with point-to-point synchronization wherever possible.

We extended the Parascope programming environment to perform the required analysis, and we augmented the TreadMarks run-time DSM library to take advantage of it. We used six Fortran programs to assess the performance benefits: Jacobi, 3D-FFT, Integer Sort, Shallow, Gauss, and Modified Gramm-Schmidt, each with two different data set sizes. The experiments were run on an 8-node IBM SP/2 using user-space communication. Compiler optimization in conjunction with the augmented run-time system achieves substantial execution time improvements over the base TreadMarks, ranging from 4% to 59% on 8 processors. Relative to message-passing implementations of the same applications, the compile-time run-time system is 0-29% slower, while the base run-time system is 5-212% slower. For the five programs that XHPF could parallelize (all except Integer Sort), the execution times achieved by the compiler-optimized shared memory programs are within 9% of XHPF.
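To make the compiler/run-time interaction concrete, the sketch below mimics, for a Jacobi-style stencil, the kind of source-to-source transformation the abstract describes. It is a minimal C sketch under stated assumptions, not the paper's implementation: the Tmk_* names echo the real TreadMarks interface but are stubbed here so the file compiles stand-alone, and tmk_validate_section() is a hypothetical stand-in for the access-pattern hint call. The idea it illustrates is that a compiler-inserted hint announces which rows the processor will read before the next barrier, letting the run time fetch them in one aggregated message instead of taking a page fault per remote page.

```c
/* Minimal sketch (assumptions: tmk_validate_section() is a
 * hypothetical hint call, NOT the paper's actual interface; the
 * Tmk_* names mirror the real TreadMarks API but are stubbed so
 * this file compiles and runs on its own). */
#include <stdio.h>

#define N 8
static float grid[N][N], newg[N][N];   /* would live in DSM shared space */

/* --- stand-ins for the TreadMarks run time --- */
static int Tmk_proc_id = 0, Tmk_nprocs = 2;
static void Tmk_barrier(unsigned id) { (void)id; }

/* Hypothetical compiler-inserted hint: "before the next barrier this
 * processor reads rows lo..hi of 'base' read-only". The run time can
 * then aggregate all the fetches into one message and skip per-access
 * consistency overhead. */
static void tmk_validate_section(void *base, int lo, int hi, int read_only)
{
    printf("validate %p rows %d..%d %s\n",
           base, lo, hi, read_only ? "(read-only)" : "(read-write)");
}

static void jacobi_sweep(void)
{
    int rows = N / Tmk_nprocs;
    int lo = Tmk_proc_id * rows, hi = lo + rows - 1;

    /* Inserted by the source-to-source transformation: announce the
     * block of rows (including halo rows) this sweep will read. */
    tmk_validate_section(grid, lo ? lo - 1 : lo,
                         hi < N - 1 ? hi + 1 : hi, 1);

    for (int i = (lo ? lo : 1); i <= (hi < N - 1 ? hi : N - 2); i++)
        for (int j = 1; j < N - 1; j++)
            newg[i][j] = 0.25f * (grid[i-1][j] + grid[i+1][j] +
                                  grid[i][j-1] + grid[i][j+1]);

    Tmk_barrier(0);   /* candidate for replacement by point-to-point
                         synchronization, per the abstract */
}

int main(void)
{
    jacobi_sweep();
    printf("done: newg[1][1] = %f\n", newg[1][1]);
    return 0;
}
```

Because only the halo rows cross processor boundaries in this access pattern, the run-time system described in the abstract could also replace the global barrier with point-to-point synchronization between neighboring processors.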
References
- TreadMarks: shared memory computing on networks of workstations. Computer, 1996.
- Reducing false sharing on shared memory multiprocessors through compile time data transformations. ACM, 1995.
- Techniques for reducing consistency-related communication in distributed shared-memory systems. ACM Transactions on Computer Systems, 1995.
- Analysis and transformation in an interactive parallel programming tool. Concurrency: Practice and Experience, 1993.
- Integrating message-passing and shared-memory. ACM, 1993.
- Compiling Fortran D for MIMD distributed-memory machines. Communications of the ACM, 1992.
- Network-based concurrent computing on the PVM system. Concurrency: Practice and Experience, 1992.
- Orca: a language for parallel programming of distributed systems. IEEE Transactions on Software Engineering, 1992.
- An implementation of interprocedural bounded regular section analysis. IEEE Transactions on Parallel and Distributed Systems, 1991.
- Memory coherence in shared virtual memory systems. ACM Transactions on Computer Systems, 1989.