Direct Cache Access for High Bandwidth Network I/O
- 27 July 2005
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- No. 10636897, pp. 50-59
- https://doi.org/10.1109/isca.2005.23
Abstract
Recent I/O technologies such as PCI Express and 10 Gb Ethernet enable unprecedented levels of I/O bandwidth in mainstream platforms. However, in traditional architectures, memory latency alone can prevent processors from matching 10 Gb inbound network I/O traffic. We propose a platform-wide method called direct cache access (DCA) to deliver inbound I/O data directly into processor caches. We demonstrate that DCA provides a significant reduction in memory latency and memory bandwidth for receive-intensive network I/O applications. Analysis of benchmarks such as SPECweb99, TPC-W, and TPC-C shows that the overall benefit depends on the relative volume of I/O to memory traffic, as well as the spatial and temporal relationship between processor and I/O memory accesses. A system-level perspective for the efficient implementation of DCA is presented.