In a multidatabase system, autonomous concurrency control schemes at local sites make it difficult for the global transaction manager to detect global deadlocks. There are two approaches to detecting possible global deadlocks: the time-out scheme and the approximate wait-for graph (AWFG) scheme. The AWFG scheme differs from the time-out scheme in that it tries to filter out false deadlocks by constructing an approximate wait-for graph after a time-out. In this thesis, we evaluate the effectiveness of the AWFG scheme in reducing false deadlocks and the effect of the time-out value on various performance metrics.
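To make the wait-for-graph idea concrete, the following sketch detects deadlock by searching for a cycle in a directed graph of waiting transactions. The transaction names, the graph representation, and the depth-first-search routine are all illustrative assumptions, not taken from the thesis; in the AWFG scheme the graph would only approximate the true waiting states, so a detected cycle may be a false deadlock.

```python
# Illustrative sketch (not the thesis's implementation): nodes are
# transactions; an edge T -> U means "T waits for a lock held by U".
# A cycle in the wait-for graph indicates a (possible) deadlock.

def has_cycle(wfg):
    """Return True if the directed graph `wfg` (dict: node -> list of
    successors) contains a cycle, using iterative 3-color DFS."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in wfg}
    for start in wfg:
        if color[start] != WHITE:
            continue
        color[start] = GRAY
        stack = [(start, iter(wfg.get(start, ())))]
        while stack:
            node, successors = stack[-1]
            advanced = False
            for nxt in successors:
                if color.get(nxt, WHITE) == GRAY:
                    return True  # back edge: a waiting cycle, i.e. deadlock
                if color.get(nxt, WHITE) == WHITE:
                    color[nxt] = GRAY
                    stack.append((nxt, iter(wfg.get(nxt, ()))))
                    advanced = True
                    break
            if not advanced:
                color[node] = BLACK  # fully explored, not on a cycle
                stack.pop()
    return False

# T1 waits for T2, T2 for T3, T3 for T1: a (global) deadlock.
deadlocked = {"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}
# Same chain without the closing edge: no deadlock.
clean = {"T1": ["T2"], "T2": ["T3"], "T3": []}
print(has_cycle(deadlocked), has_cycle(clean))  # True False
```

Because the global transaction manager cannot observe local waiting states exactly, the edges of an *approximate* wait-for graph may include spurious waits, which is precisely how the AWFG scheme can still report false deadlocks.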
Our simulation studies show that the AWFG scheme is not effective in reducing false deadlocks, since it is unaware of the exact waiting states in each local DBMS. Furthermore, under the AWFG scheme, many transactions that survive the time-out are eventually aborted anyway as false deadlocks, which leads to poor system performance.
Larger time-out values reduce the false deadlock ratio but increase the overall response time. The overall response time reaches its minimum at the time-out value where the reduction in the abort rate begins to level off. Moreover, this optimal value is insensitive to the data contention level of local transactions, and thus it can be set according to the data contention level of global transactions alone. We show that this value should be small when global transactions are large and when the ratio of write operations is high.