Optimal prefetching via data compression
- 9 December 2002
- Conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- pp. 121-130
- https://doi.org/10.1109/sfcs.1991.185360
Abstract
A form of the competitive philosophy is applied to the problem of prefetching to develop an optimal universal prefetcher in terms of fault rate, with particular applications to large-scale databases and hypertext systems. The algorithms are novel in that they are based on data compression techniques that are both theoretically optimal and good in practice. Intuitively, in order to compress data effectively, one has to be able to predict future data well, and thus good data compressors should also predict well for the purposes of prefetching. It is shown for powerful models, such as Markov sources and mth-order Markov sources, that the page fault rates incurred by the prefetching algorithms presented are optimal in the limit for almost all sequences of page accesses.
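The intuition above, that a compressor's next-symbol statistics double as a prefetch predictor, can be illustrated with a minimal sketch: an order-m Markov frequency model that prefetches the most probable successors of the current access context. This is a simplified stand-in for the Ziv-Lempel and PPM-style predictors the paper draws on, not the paper's actual algorithm; the names `MarkovPrefetcher`, `order`, and `cache_size` are illustrative assumptions.

```python
from collections import Counter, defaultdict


class MarkovPrefetcher:
    """Order-m Markov predictor over page accesses.

    Sketch of the compression-as-prediction idea: the per-context
    frequency counts a PPM-style compressor would use for coding are
    used here to rank candidate next pages for prefetching.
    All names and parameters are illustrative, not from the paper.
    """

    def __init__(self, order=1, cache_size=1):
        self.order = order                   # context length m
        self.cache_size = cache_size         # pages prefetched per access
        self.counts = defaultdict(Counter)   # context -> successor counts
        self.context = ()

    def predict(self):
        """Return up to cache_size most frequent successors of the context."""
        ranked = self.counts[self.context].most_common(self.cache_size)
        return [page for page, _ in ranked]

    def access(self, page):
        """Record an access and slide the order-m context window."""
        self.counts[self.context][page] += 1
        self.context = (self.context + (page,))[-self.order :]


# Usage: count faults, i.e., accesses not covered by the prior prefetch.
prefetcher = MarkovPrefetcher(order=1, cache_size=1)
faults = 0
for page in "abababababab":
    if page not in prefetcher.predict():
        faults += 1
    prefetcher.access(page)
print(f"faults: {faults}")  # 3: after warm-up, the a<->b pattern is learned
```

On a periodic access sequence like the one above, the predictor faults only during warm-up; the paper's result is the much stronger claim that, for sequences generated by (mth-order) Markov sources, compression-based prefetchers achieve the optimal fault rate in the limit.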
This publication has 13 references indexed in Scilit:
- Analysis of arithmetic coding for data compression. IEEE, 2002.
- On the necessity of Occam algorithms. ACM, 1990.
- Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 1989.
- Occam's Razor. Information Processing Letters, 1987.
- Large deviations, hypotheses testing, and source coding for finite Markov chains. IEEE Transactions on Information Theory, 1985.
- Amortized efficiency of list update and paging rules. Communications of the ACM, 1985.
- Data Compression Using Adaptive Coding and Partial String Matching. IEEE Transactions on Communications, 1984.
- An Introduction to Arithmetic Coding. IBM Journal of Research and Development, 1984.
- A note on the Ziv-Lempel model for compressing individual sequences (Corresp.). IEEE Transactions on Information Theory, 1983.
- On the Complexity of Finite Sequences. IEEE Transactions on Information Theory, 1976.