The case for controlled inconsistency in replicated data
- 4 December 2002
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
Although replication is widely accepted as a good technique for increasing the reliability and availability of data, it is also known to be an expensive proposition, especially as the number of replicas increases. Protocols that keep the copies consistent, such as two-phase commit, require one or more rounds of messages and impose a high overhead on overall performance. Some applications can run perfectly well using copies that may not be consistent, as long as the application knows how much the copy can differ from the most recent version of the data. Many such applications would rather tolerate some inconsistency in the information in order to provide a better, faster service. A scenario is considered in which the database consists of several segments, each one controlled by a single node (or group of nodes). All the updates to a particular segment of the data take place at its controlling node, while the rest of the system is allowed to ask for quasi-copies of the data contained in the segment.
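To make the scenario concrete, the sketch below illustrates one possible reading of the quasi-copy idea: all writes go to a segment's controlling node, while other nodes serve reads from a local copy as long as its divergence stays within an application-specified bound. The names (`Segment`, `QuasiCopy`, `max_versions_behind`, `max_age_s`) are illustrative assumptions rather than terms from the paper, and the staleness check is simplified to a direct version comparison against the master.

```python
# Hypothetical sketch of controlled inconsistency via quasi-copies.
# Class and parameter names are illustrative, not taken from the paper.
import time
from dataclasses import dataclass, field


@dataclass
class Segment:
    """Master copy of a data segment, updated only at its controlling node."""
    data: dict = field(default_factory=dict)
    version: int = 0

    def update(self, key, value):
        # All writes funnel through the controlling node, so no
        # multi-node commit protocol (e.g. two-phase commit) is required.
        self.data[key] = value
        self.version += 1

    def snapshot(self) -> "QuasiCopy":
        return QuasiCopy(dict(self.data), self.version, time.time())


@dataclass
class QuasiCopy:
    """A possibly stale copy of a segment held at a non-controlling node."""
    data: dict
    version: int
    fetched_at: float


def read(copy: QuasiCopy, master: Segment, key,
         max_versions_behind: int = 5, max_age_s: float = 30.0):
    """Serve from the quasi-copy while its divergence from the master
    stays within the application's tolerance; otherwise refresh it.
    (Comparing versions directly stands in for whatever coherence
    condition the application actually specifies.)"""
    too_stale = (master.version - copy.version > max_versions_behind
                 or time.time() - copy.fetched_at > max_age_s)
    if too_stale:
        fresh = master.snapshot()  # one round trip to the controlling node
        copy.data, copy.version, copy.fetched_at = (
            fresh.data, fresh.version, fresh.fetched_at)
    return copy.data.get(key)
```

The point of the sketch is only that the tolerance parameters are set by the application, so each reader trades freshness for response time explicitly rather than paying for strict consistency on every access.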