Using the Memory Channel Network
- 1 January 1997
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Micro
- Vol. 17 (1), 19-25
- https://doi.org/10.1109/40.566189
Abstract
Digital has announced and shipped its first-generation, high-performance network for clusters, the Memory Channel for PCI network, and all SMP AlphaServers running Digital Unix support it. Digital has also publicly demonstrated Memory Channel-connected systems running Windows NT. The Memory Channel network requires no functionality beyond the PCI bus specification and works with any system that has a PCI I/O slot. Production Memory Channel clusters can be as large as eight nodes (a limit of the first-generation hardware) of 12 processors each, for 96 processors in all. One such cluster, installed at Supercomputing '95, ran clusterwide applications using High Performance Fortran, PVM, and MPI. A four-node, 48-processor Memory Channel cluster running Oracle Parallel Server has held the record for TPC-C benchmarks since its introduction in April 1996. The same Memory Channel network that connects this high-end database configuration also cost-effectively supports two-node, single-processor clusters. Latency over Memory Channel for a one-way, user-process-to-user-process message is 2.9 microseconds, and the processor overhead is less than 150 ns for a 32-byte message. Standard message-passing APIs benefit greatly from this underlying capability.
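The programming model behind those latency and overhead figures is memory-mapped communication: a transmit region is mapped into the sender's address space, and ordinary user-mode stores to it are forwarded by the adapter into a receive region on other nodes, with no system call on the data path. Below is a minimal sketch of that style of messaging. The abstract does not show Digital's actual API, so the region layout and names here are hypothetical, and a POSIX shared-memory segment stands in for the adapter's mapped PCI window so the sketch runs on a single machine.

```c
/* Sketch of memory-mapped user-space messaging in the Memory Channel
 * style. On a real cluster the transmit region would be a window mapped
 * from the Memory Channel PCI adapter; a POSIX shared-memory segment
 * stands in here. Names and layout are illustrative, not Digital's API. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MSG_BYTES 32              /* the 32-byte message size cited above */

struct channel {
    volatile uint32_t seq;        /* bumped last, so a changed seq means
                                     the payload is complete              */
    char payload[MSG_BYTES];
};

static void ch_send(struct channel *tx, const char *msg)
{
    memcpy(tx->payload, msg, MSG_BYTES); /* ordinary stores to the mapped
                                            region; no syscall on the
                                            data path                     */
    __sync_synchronize();         /* order the payload before the flag */
    tx->seq++;
}

static void ch_recv(struct channel *rx, uint32_t last, char *out)
{
    while (rx->seq == last)       /* poll: at a few microseconds of
                                     latency, spinning beats blocking    */
        ;
    __sync_synchronize();
    memcpy(out, rx->payload, MSG_BYTES);
}

int main(void)
{
    int fd = shm_open("/mc_sketch", O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, sizeof(struct channel)) < 0)
        return 1;
    struct channel *ch = mmap(NULL, sizeof *ch, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    char in[MSG_BYTES] = "hello over the channel";
    char out[MSG_BYTES];
    uint32_t last = ch->seq;
    ch_send(ch, in);              /* on a cluster, sender and receiver */
    ch_recv(ch, last, out);       /* would run on different nodes      */
    printf("%s\n", out);
    shm_unlink("/mc_sketch");
    return 0;
}
```

Polling on the sequence flag instead of taking an interrupt is the kind of design that keeps per-message processor overhead in the sub-microsecond range quoted above. (On older glibc, build with `cc sketch.c -lrt` for shm_open.)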