
Future directions

The moment matching algorithms and DR proposed in this thesis by no means provide perfect solutions for the performance evaluation of multiserver systems. Below, we discuss interesting directions for future research that build on the work in this thesis.

Interesting future directions for moment matching algorithms include extensions to matching more moments and to approximating a correlated sequence of random variables. Specifically, our moment matching algorithms match the first three moments of an input distribution, but it may be desirable to match more moments. Also, a PH distribution can be used to model a sequence of independent random variables; however, in some applications, it is desirable to capture the correlation within the sequence. The Markovian arrival process (MAP) can represent a sequence of correlated PH distributions, but how a sequence of correlated random variables should be mapped into a MAP is not well understood (see e.g. [72] for work in this direction).
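To fix notation for what matching more moments entails, we recall a standard fact about PH distributions (stated here only as a reference point, in our own notation): for a PH distribution with initial probability vector $\vec{\tau}$ and subgenerator matrix $\mathbf{T}$, all moments are available in closed form,
\begin{displaymath}
E[X^k] \;=\; k!\,\vec{\tau}\,(-\mathbf{T})^{-k}\,\vec{1}, \qquad k = 1,2,3,\ldots,
\end{displaymath}
so an algorithm matching $k>3$ moments amounts to solving these equations for $(\vec{\tau},\mathbf{T})$ with as few phases as possible.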

In designing moment matching algorithms that match more moments, it would be helpful to characterize the set of distributions whose first $k$ moments can be matched by an $n$-phase PH distribution for $k>3$. This would extend our characterization of the set ${\cal S}^{(n)}$, the set of distributions that are well-represented by an $n$-phase acyclic PH distribution. It would also be useful to extend this characterization to general PH distributions, including cyclic PH distributions. Such a characterization could be used to prove the minimality of the number of phases used in our closed form solutions in a stronger sense. Although our numerical experiments suggest that allowing cycles in the PH distribution does not help to reduce the number of phases needed to match the first three moments, a rigorous characterization is not known to date.
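One known necessary condition illustrates the flavor of such a characterization: the $n$-phase Erlang distribution has the smallest squared coefficient of variation, $1/n$, among all $n$-phase PH distributions (a result due to Aldous and Shepp), so any distribution whose first two moments $\mu_1,\mu_2$ can be matched by an $n$-phase PH distribution must satisfy
\begin{displaymath}
\frac{\mu_2}{\mu_1^2} \;\geq\; \frac{n+1}{n}.
\end{displaymath}
A characterization for $k>3$ would require analogous bounds involving the higher moments.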

Interesting future directions for DR include establishing a theoretical guarantee on its accuracy and extending it in various ways. Although we validate the accuracy of DR against simulation, no theoretical guarantee is proved, and it is desirable to establish theoretical bounds on the error in DR. Also, although we apply DR to the analysis of the RFB/GFB processes in this thesis, the idea behind DR has broad applicability beyond these processes. Below, we list possible extensions of DR; it would be interesting to pursue them and apply them to the analysis of more complex multiserver systems.

For example, throughout this thesis we assume that service demands have PH distributions, but in some cases this assumption can be relaxed to general distributions. In particular, in the analysis of the threshold-based policy for reducing switching costs in cycle stealing, the donor's service demand and switching back time can be generally distributed, as we show in [151,152]. This extension relies on a (well known) analysis of the busy period in an M/GI/1 queue with a setup time.
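As a brief illustration of that underlying result (stated here only in expectation; the analysis in [151,152] also requires higher moments), consider an M/GI/1 queue with load $\rho = \lambda E[S] < 1$ in which each busy period begins with a setup time $U$ before the first job is served. The busy period is then a delay busy period started by $U+S$ units of work, and the standard M/GI/1 busy period argument gives
\begin{displaymath}
E[B] \;=\; \frac{E[U] + E[S]}{1-\rho},
\end{displaymath}
which requires only the means of $U$ and $S$, not their distributional form.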

In fact, the idea behind DR can be applied to RFB/GFB processes with quite a broad class of background processes. The analysis of the RFB/GFB processes relies on the analysis of the ``busy period'' in a QBD process via Neuts' algorithm; however, Neuts' algorithm applies to a much broader class of Markov chains, namely M/G/1 type semi-Markov processes. By fully utilizing Neuts' algorithm, DR can be applied to RFB/GFB processes with far fewer restrictions on the background process.
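To make the role of Neuts' algorithm concrete, the following sketch computes the matrix $G$ of a discrete-time QBD process by plain successive substitution; the transition blocks are hypothetical, and faster iterations (e.g., logarithmic reduction) are omitted. The $(i,j)$ entry of $G$ is the probability that a busy period starting in phase $i$ of some level first returns to the level below in phase $j$.
\begin{verbatim}
# A minimal sketch: computing the busy-period matrix G of a
# discrete-time QBD process by successive substitution (Neuts).
# The blocks below are hypothetical; A0 + A1 + A2 is stochastic.
import numpy as np

A0 = np.array([[0.1, 0.1], [0.0, 0.2]])  # one level up
A1 = np.array([[0.3, 0.1], [0.2, 0.2]])  # within the level
A2 = np.array([[0.3, 0.1], [0.1, 0.3]])  # one level down

# G is the minimal nonnegative solution of G = A2 + A1 G + A0 G^2.
G = np.zeros_like(A1)
for _ in range(100000):
    G_next = A2 + A1 @ G + A0 @ G @ G
    if np.max(np.abs(G_next - G)) < 1e-12:
        G = G_next
        break
    G = G_next

print(G)               # phase transitions over one busy period
print(G.sum(axis=1))   # rows sum to 1 when the QBD is recurrent
\end{verbatim}
An analogous fixed-point equation, with infinitely many blocks truncated in practice, characterizes $G$ for M/G/1 type processes, which is what allows the broader class of background processes above.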

Restrictions on the foreground process can also be relaxed. Our restrictions on the foreground process guarantee that the 1D Markov chains reduced from the RFB/GFB processes are QBD processes. However, the QBD process is not the only class of Markov chains that is analytically tractable; for example, M/G/1 and G/M/1 type processes can also be analyzed efficiently via matrix analytic methods [111]. Thus, the restrictions on the foreground process can be relaxed accordingly.
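To recall why such structure yields tractability (a standard matrix-analytic result, quoted here only as a sketch for the QBD case, with the same block notation as in the sketch above): the stationary probability vectors of a positive recurrent QBD are matrix geometric,
\begin{displaymath}
\vec{\pi}_{\ell+1} \;=\; \vec{\pi}_{\ell}\, R \quad (\ell \geq 1), \qquad \mbox{where $R$ is the minimal nonnegative solution of } R = A_0 + R A_1 + R^2 A_2,
\end{displaymath}
and the G/M/1 and M/G/1 type processes admit analogous, though more involved, recursions.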

Further, the GFB process is limited to 2D Markov chains, but it can be extended to higher dimensions, just as we extend the FB process to the RFB process. Also, we remark that the multidimensional Markov chain need not have an infinite state space in every dimension. DR can reduce a Markov chain that has a large number of states in multiple dimensions into a Markov chain that has a smaller number of states in each dimension, since matrix analytic methods and Neuts' algorithm apply to QBD processes with a finite number of levels as well.

Finally, an essential restriction in DR is that the background process must be independent of the foreground process while the background process is in levels $\geq\kappa$, although the foreground process may depend on the background process at all times. Unfortunately, it is not at all clear how DR can be extended to the symmetric case where the foreground and background processes depend on each other at all times. For example, two-way cycle stealing, where each of two servers functions as both beneficiary and donor, does not appear to be representable as a GFB process, precisely because the two processes depend on each other at all times. In theory, the analysis of two-way cycle stealing can be reduced to a boundary value problem (see Section 3.3), but this leads to a complex expression whose evaluation often suffers from numerical instability. It would be interesting to pursue how computational approaches such as DR can be applied to the analysis of two-way cycle stealing.

