Maintaining strong cache consistency in the World-Wide Web
- 22 November 2002
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- ISSN 1063-6927, pp. 12-21
- https://doi.org/10.1109/icdcs.1997.597804
Abstract
As the Web continues to explode in size, caching becomes increasingly important. With caching comes the problem of cache consistency. Conventional wisdom holds that strong cache consistency is too expensive for the Web, and that weak consistency methods such as Time-To-Live (TTL) are most appropriate. This article compares three consistency approaches: adaptive TTL, polling-every-time, and invalidation, using a prototype implementation and trace replay in a simulated environment. Our results show that invalidation generates less network traffic and server workload than adaptive TTL, or a comparable amount, and has a slightly lower average client response time, while polling-every-time generates more network traffic and longer client response times. We show that, contrary to popular belief, strong cache consistency can be maintained for the Web at little or no extra cost compared with current weak consistency approaches, and that it should be maintained using an invalidation-based protocol.
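To make the contrast between the weak and strong consistency approaches named in the abstract concrete, the sketch below illustrates an adaptive-TTL freshness check next to a server-driven invalidation flag. It is a minimal illustration, not the paper's prototype: the class, method names, and the 0.2 aging factor are assumptions chosen for readability.

```python
import time


class CacheEntry:
    """A cached copy of a Web object (illustrative sketch, not the paper's code)."""

    def __init__(self, url, body, last_modified, fetched_at=None):
        self.url = url
        self.body = body
        self.last_modified = last_modified          # server-reported modification time
        self.fetched_at = fetched_at or time.time() # when the cache fetched the copy
        self.valid = True                           # used only by the invalidation scheme

    # --- Adaptive TTL (weak consistency) -----------------------------------
    # Serve the entry without contacting the server while its age is below a
    # fraction of how old the object was when fetched (the adaptive heuristic).
    def fresh_by_adaptive_ttl(self, now=None, factor=0.2):
        now = now or time.time()
        age = now - self.fetched_at
        ttl = factor * (self.fetched_at - self.last_modified)
        return age < ttl

    # --- Invalidation (strong consistency) ----------------------------------
    # The server tracks which caches hold the object and notifies them when it
    # changes; the cache then only needs to check a validity flag locally.
    def invalidate(self):
        self.valid = False


if __name__ == "__main__":
    entry = CacheEntry("http://example.com/page.html", b"<html>...</html>",
                       last_modified=time.time() - 3600)
    print("fresh under adaptive TTL:", entry.fresh_by_adaptive_ttl())
    entry.invalidate()  # a server-side change pushed to the cache
    print("valid after invalidation:", entry.valid)
```

Polling-every-time, the third approach compared in the paper, would instead contact the origin server on every access before serving the cached body, which is why it costs more network traffic and response time.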