Evaluating InfiniBand performance with PCI Express
- 4 April 2005
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Micro
- Vol. 25 (1), 20-29
- https://doi.org/10.1109/mm.2005.9
Abstract
The InfiniBand architecture is an industry standard that offers low latency and high bandwidth as well as advanced features such as remote direct memory access (RDMA), atomic operations, multicast, and quality of service. InfiniBand products can achieve a latency of several microseconds for small messages and a bandwidth of 700 to 900 Mbytes/s. As a result, InfiniBand is becoming increasingly popular as a high-speed interconnect technology for building high-performance clusters. The Peripheral Component Interconnect (PCI) has been the standard local-I/O-bus technology for the last 10 years. However, more and more applications require lower latency and higher bandwidth than a PCI bus can provide. As an extension, PCI-X offers higher peak performance and efficiency, while PCI Express goes further, replacing the shared parallel bus with serial, point-to-point links. InfiniBand host channel adapters (HCAs) with PCI Express achieve 20 to 30 percent lower latency for small messages compared with HCAs using 64-bit, 133-MHz PCI-X interfaces. PCI Express also improves performance at the MPI level, achieving a latency of 4.1 μs for small messages. It can also improve MPI collective communication and bandwidth-bound MPI application performance.
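As context for the MPI-level latency figure quoted above, the sketch below shows the kind of ping-pong microbenchmark commonly used to measure small-message MPI latency: two ranks exchange a small message many times, and half the averaged round-trip time is reported as one-way latency. This is an illustrative example only, not the benchmark code used in the paper; the iteration count and message size are arbitrary assumptions.

```c
/* Minimal MPI ping-pong latency sketch (illustrative only).
 * Rank 0 and rank 1 bounce a small message back and forth;
 * half the averaged round-trip time approximates one-way latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;   /* assumed iteration count */
    const int msg_size = 4;    /* assumed small-message size in bytes */
    char buf[4] = {0};
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    if (rank == 0)
        printf("one-way latency: %.2f us\n",
               elapsed / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}
```

Run with two processes (for example, `mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong`); results like the 4.1 μs reported in the paper depend on the interconnect, the HCA's host interface, and the MPI implementation.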