Deadlock avoidance in loosely-coupled multiprocessors with finite buffer pools
- 1 April 1989
- journal article
- Published by Association for Computing Machinery (ACM) in ACM SIGOPS Operating Systems Review
- Vol. 23 (2), 20-24
- https://doi.org/10.1145/858344.858345
Abstract
In many loosely-coupled multiprocessor systems, each processor dedicates a fixed-size portion of its memory as a message buffer pool for sending and receiving messages among the processors. These systems typically use these buffers as a means for clients to send requests through an interconnection facility to servers that control system resources, and for servers to send responses back to clients. The Remote Procedure Call model is a good example of a framework in which such interactions could occur [Birrell84]. This type of interaction also occurs in networked environments, in particular to support distributed file system implementations. In order to simplify higher-level protocols, we assume that reliability is provided by the interconnection facility. Submitted messages are formatted into a message buffer and kept queued for sending until transmission to the destination occurs and is acknowledged. Buffer deadlock occurs when a set of the processors can no longer send or receive messages because there are no free buffers on the set of processors and none will ever be freed. The solution often employed to avoid this class of deadlock is to make the buffer pool large enough that normal system loads are unlikely to cause deadlock; alternatively, the buffer pool is allowed to grow dynamically when the load requires it. These solutions are impractical in situations where the processors are memory-limited. We present a solution that avoids this deadlock by imposing a set of constraints on the structure and number of requests in the system. First, we restrict the number of outstanding server requests from any given processor to at least one less than the number of message buffers on that processor. Second, we require that request processing on a server cannot deadlock due to other causes while holding a message buffer. Third, we do not allow requests to be forwarded directly from one server to another. We prove by contradiction that these constraints are sufficient to avoid deadlocks due to buffer starvation. Since the constraints can be applied locally at each processor, this approach has the considerable advantage of not relying on global information collected using the resource we are trying to manage, that is, the message buffers.
2. Previous Work
Most of the recent work on deadlock in distributed systems is concentrated in two areas. The first class deals with the general problem of clients and resources spread throughout a distributed system. Most papers discuss means whereby deadlock may be detected and then broken. These algorithms tend to send probe messages through the interconnection network to collect enough information to compute a transaction-wait-for graph, and use the presence of cycles to indicate deadlock. As such, these approaches are a level above what we are considering, as they assume that it is always possible to move messages through the interconnection network. An excellent survey and bibliography for this class is given in [Elma86]. The second class deals …
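The first constraint in the abstract, keeping the number of outstanding server requests from a processor strictly below its buffer count, can be applied purely locally. The following C sketch illustrates that admission rule under assumed names (buffer_pool, try_send_request, NBUF); it is an illustration of the idea, not the authors' implementation.

```c
/*
 * Minimal sketch of the local admission rule described in the abstract.
 * Names (buffer_pool, try_send_request, NBUF) are assumptions for this
 * illustration, not taken from the paper.  A processor with NBUF message
 * buffers admits a new outbound request only while fewer than NBUF - 1
 * requests are outstanding, so at least one buffer always stays free to
 * receive a response and let the system drain.
 */
#include <stdio.h>
#include <stdbool.h>

#define NBUF 8                     /* fixed-size message buffer pool */

struct buffer_pool {
    int outstanding_requests;      /* requests sent but not yet answered */
};

/* Admission check applied locally, using no global information. */
static bool try_send_request(struct buffer_pool *p)
{
    if (p->outstanding_requests >= NBUF - 1)
        return false;              /* would risk buffer starvation: defer */
    p->outstanding_requests++;     /* claim a buffer for the request */
    return true;
}

/* Called when a response arrives and its buffer is released. */
static void response_received(struct buffer_pool *p)
{
    p->outstanding_requests--;
}

int main(void)
{
    struct buffer_pool pool = { 0 };
    int admitted = 0;

    /* Try to issue more requests than the pool can safely carry. */
    for (int i = 0; i < NBUF + 2; i++)
        if (try_send_request(&pool))
            admitted++;

    /* At most NBUF - 1 requests are admitted; one buffer stays free. */
    printf("admitted %d of %d attempted requests\n", admitted, NBUF + 2);

    response_received(&pool);      /* a response frees a buffer ... */
    printf("after one response: can send again? %s\n",
           try_send_request(&pool) ? "yes" : "no");
    return 0;
}
```

Because the check consults only the local pool, no probe messages or global wait-for graph are needed; the remaining constraints (no other deadlocks while holding a buffer, no server-to-server forwarding) are structural rules on the protocol rather than runtime checks.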
References
- A survey of distributed deadlock detection algorithms, ACM SIGMOD Record, 1986
- Implementing remote procedure calls, ACM Transactions on Computer Systems, 1984
- Distributed deadlock detection, ACM Transactions on Computer Systems, 1983
- Deadlock- and livelock-free packet switching networks, Association for Computing Machinery (ACM), 1980
- System Deadlocks, ACM Computing Surveys, 1971