
# Analytical tools developed

We have first proposed moment matching algorithms, which allow a multiserver system to be modeled as a (multidimensional) Markov chain for performance analysis. Also, to prove the minimality of the number of phases used in our moment matching algorithms, we have characterized the set of distributions that are well-represented by an n-phase acyclic phase type (PH) distribution.

In developing the moment matching algorithms, we have introduced various theoretical concepts, such as the normalized moments, the r-value, and the EC distribution, together with associated sets of distributions and functions, and have proved their properties. These new concepts and their properties have proved quite useful in developing moment matching algorithms, and they have recently stimulated research in this area. For example, Bobbio et al. [22] have recently used the normalized moments to provide exact characterizations of both the set of distributions that can be well-represented by an n-phase acyclic PH distribution and the set of distributions that can be well-represented by an n-phase acyclic PH distribution with no probability mass at zero. Bobbio et al. use these exact characterizations to construct a minimal acyclic PH distribution that well-represents any given distribution.
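
As a concrete illustration of two of these concepts, the sketch below computes the normalized moments and r-value of a distribution from its first three raw moments, using one common set of definitions (m2 = mu2/mu1^2, m3 = mu3/(mu1*mu2), r = m3/m2; these are assumptions here, not a restatement of the thesis's exact definitions). It also applies a known lower bound on the number of phases: the Erlang-n distribution minimizes variability among n-phase PH distributions, so a distribution with normalized second moment m2 needs at least 1/(m2-1) phases.

```python
import math

def normalized_moments(mu1, mu2, mu3):
    """Return (m2, m3, r) for a distribution with raw moments mu1, mu2, mu3.

    Definitions assumed: m2 = mu2/mu1^2, m3 = mu3/(mu1*mu2), r = m3/m2.
    """
    m2 = mu2 / mu1 ** 2
    m3 = mu3 / (mu1 * mu2)
    return m2, m3, m3 / m2

def erlang_phase_lower_bound(m2):
    """Least n such that the Erlang-n normalized second moment (n+1)/n
    does not exceed m2; distributions with smaller m2 need more phases."""
    return math.ceil(1.0 / (m2 - 1.0))

# Example: an exponential with rate 1 has mu1 = 1, mu2 = 2, mu3 = 6,
# giving m2 = 2, m3 = 3, r = 1.5; a single phase suffices.
m2, m3, r = normalized_moments(1.0, 2.0, 6.0)
```

This is only a sketch of the bookkeeping that underlies moment matching, not the matching algorithms themselves.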

We have also proposed a new analytical tool, dimensionality reduction (DR), for analyzing multidimensional Markov chains (Markov chains on a multidimensionally infinite state space). The key idea in DR is to approximate an infinite portion of a Markov chain, which often corresponds to the "busy period" of a system, by a collection of PH distributions that match the first three moments of the busy period duration conditioned on how the Markov chain enters and exits the busy period. As DR involves approximations, we have validated its accuracy against simulation. Overall, we find that the error in DR is quite small: specifically, the error is within 3% (often within 1%) for first-order metrics (such as mean delay and mean queue length) and within 7% for second-order metrics (such as the second moment of queue length) over a range of parameters. To reduce the computational complexity of DR, we have introduced two approximations of DR: DR-PI and DR-CI. Although these approximations slightly worsen the accuracy, they can significantly reduce the running time.
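
To give a flavor of the PH-fitting step, the sketch below performs a textbook two-moment fit of a "busy period" to a two-phase hyperexponential with balanced means. This is a deliberate simplification: DR itself matches the first three conditional moments, and the construction below is a standard illustration, not the thesis's algorithm.

```python
import math

def fit_h2_balanced(mu1, scv):
    """Fit mean mu1 and squared coefficient of variation scv > 1 with a
    two-phase hyperexponential: rate lam1 w.p. p, rate lam2 w.p. 1 - p,
    under the 'balanced means' convention p/lam1 = (1-p)/lam2."""
    p = 0.5 * (1.0 + math.sqrt((scv - 1.0) / (scv + 1.0)))
    lam1 = 2.0 * p / mu1
    lam2 = 2.0 * (1.0 - p) / mu1
    return p, lam1, lam2

def h2_moments(p, lam1, lam2):
    """First two raw moments of the fitted hyperexponential."""
    mu1 = p / lam1 + (1.0 - p) / lam2
    mu2 = 2.0 * (p / lam1 ** 2 + (1.0 - p) / lam2 ** 2)
    return mu1, mu2

# Fit a hypothetical busy period with mean 2.0 and SCV 4.0, then
# recover the moments of the fitted distribution to verify the fit.
p, lam1, lam2 = fit_h2_balanced(2.0, 4.0)
mu1, mu2 = h2_moments(p, lam1, lam2)
```

Matching a third moment as well, as DR does, requires a richer family (e.g., Coxian or EC distributions) than this two-parameter fit.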

DR allows us to analyze the performance of a multiserver system whose behavior is modeled as a multidimensional Markov chain. Since it is a nontrivial task to determine whether and how DR can be applied to a particular multiserver system, we formally define a class of multidimensional Markov chains, called RFB/GFB processes (recursive foreground-background processes or generalized foreground-background processes), that can be analyzed via DR. The definition of RFB/GFB processes makes it easier to determine whether a given multiserver system can be analyzed via DR, and our analysis of RFB/GFB processes enables one to analyze the multiserver system by simply modeling it as an RFB/GFB process. In fact, many multiserver systems with resource sharing or job prioritization can be modeled as RFB/GFB processes:

• multiserver systems with multiple priority classes (Chapter 4),
• size-based task assignment with cycle stealing under immediate dispatching, SBCS-ID, for server farms (Chapter 5),
• size-based task assignment with cycle stealing under central queue, SBCS-CQ, for server farms (Chapter 5),
• threshold-based policies for reducing switching costs in cycle stealing (Chapter 6), and
• various threshold-based policies for the Beneficiary-Donor model (Chapter 7).
For the first time, DR allows these multiserver systems to be analyzed with high accuracy.

Beyond the analysis of multiserver systems, our moment matching algorithms and DR, as well as the ideas behind these tools, have broad applicability in computer science, engineering, operations research, and other fields. For example, many optimization problems in stochastic environments can be formulated as Markov decision processes [159] by mapping general (input) probability distributions into PH distributions via our moment matching algorithms. The ideas in DR may then be used to reduce the state space of the Markov decision process from infinite to finite, or from large to small, so that the computational complexity of solving the Markov decision problem is reduced.
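
To illustrate only the final step of that pipeline, the sketch below solves a finite-state Markov decision process by standard value iteration, as one would after PH fitting and state-space reduction have produced a finite model. The tiny admission-control chain (a queue truncated at length 2, with a per-admission reward and a holding cost) is invented purely for illustration and is not taken from the thesis.

```python
def value_iteration(states, actions, P, reward, gamma=0.95, tol=1e-8):
    """Solve a discounted MDP by value iteration.

    P[s][a] is a list of (next_state, prob) pairs; reward[s][a] is the
    immediate reward for taking action a in state s.
    """
    V = {s: 0.0 for s in states}
    while True:
        V_new = {
            s: max(
                reward[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in actions
            )
            for s in states
        }
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

# Hypothetical 3-state admission-control chain (queue truncated at 2).
states = [0, 1, 2]
actions = ["admit", "reject"]
P = {
    0: {"admit": [(1, 0.4), (0, 0.6)], "reject": [(0, 1.0)]},
    1: {"admit": [(2, 0.4), (0, 0.6)], "reject": [(0, 0.6), (1, 0.4)]},
    2: {"admit": [(2, 0.4), (1, 0.6)], "reject": [(1, 0.6), (2, 0.4)]},
}
reward = {s: {"admit": 1.0 - 0.5 * s, "reject": 0.0} for s in states}
V = value_iteration(states, actions, P, reward)
```

The point of state-space reduction is precisely to make this kind of finite solve tractable: value iteration requires a finite state space, whereas the raw queueing model is infinite.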

In response to many requests, our moment matching algorithms and DR, as well as its approximations DR-PI and DR-CI, as described in this thesis have largely been implemented, and the latest implementation is made available at an online code repository.

Takayuki Osogami 2005-07-19