Supporting Reference And Dirty Bits In SPUR's Virtual Address Cache
- 24 August 2005
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- p. 122-130
- https://doi.org/10.1109/isca.1989.714546
Abstract
Virtual address caches can provide faster access times than physical address caches, because translation is only required on cache misses. However, because translation information is not checked on each cache access, maintaining reference and dirty bits is more difficult. In this paper we examine the trade-offs in supporting reference and dirty bits in a virtual address cache. We use measurements from a uniprocessor SPUR prototype to evaluate different alternatives. The prototype's built-in performance counters make it easy to determine the frequency of important events and to calculate performance metrics. Our results indicate that dirty bits can be efficiently emulated with protection, and thus require no special hardware support. Although this can lead to excess faults when previously cached blocks are written, these account for only 19% of the total faults, on average. For reference bits, a miss bit approximation, which checks the reference bits only on cache misses, leads to more page faults at smaller memory sizes. However, the additional overhead required to maintain true reference bits far exceeds the benefits of a lower fault rate.
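The dirty-bit emulation the abstract describes can be pictured with a short sketch: a clean page is kept write-protected, the first store to it raises a protection fault, and software records the dirty bit before lifting the protection so the store can be retried. The sketch below is purely illustrative; the structure and function names (page_entry, handle_write_fault) are assumptions, not SPUR's actual fault-handler code.

```c
/* Illustrative sketch of dirty-bit emulation via write protection.
 * Clean pages are mapped read-only in the MMU even when the process
 * may write them; the first store faults, the handler sets the
 * software dirty bit and re-enables writes, and the store is retried.
 * All names here are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

struct page_entry {
    bool may_write;   /* permission the process actually holds        */
    bool hw_writable; /* permission currently programmed into the MMU */
    bool dirty;       /* software-maintained dirty bit                */
};

/* Invoked when a store hits a page the MMU has marked read-only. */
static void handle_write_fault(struct page_entry *pg)
{
    if (pg->may_write && !pg->dirty) {
        pg->dirty = true;        /* first write observed: set dirty bit */
        pg->hw_writable = true;  /* lift the artificial protection      */
        /* the faulting store would then be retried by hardware */
    } else {
        puts("genuine protection violation");
    }
}

int main(void)
{
    struct page_entry pg = { .may_write = true, .hw_writable = false,
                             .dirty = false };
    handle_write_fault(&pg);     /* simulate the first store to the page */
    printf("dirty=%d hw_writable=%d\n", pg.dirty, pg.hw_writable);
    return 0;
}
```

These extra protection faults on first writes are the "excess faults" the abstract quantifies at roughly 19% of total faults on average.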