

Dimensionality reduction of Markov chains

How can we analyze Markov chains on a multidimensionally infinite state space?

Markov modeling is a common approach to analyzing the performance of computer systems, service facilities, and manufacturing systems. The performance (such as the mean response time) of a system can be analyzed by modeling the system as a Markov chain and computing the stationary probabilities of that chain. However, the performance analysis of a multiserver system with multiple classes of jobs faces a common source of difficulty: the Markov chain that models the system behavior has a state space that grows infinitely in multiple dimensions. We start this chapter by providing two simple examples of such multidimensional Markov chains (Markov chains on a multidimensionally infinite state space).
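Before turning to the multidimensional case, a minimal sketch (not from the text) of the one-dimensional situation may help: for an M/M/1 queue, the chain tracking the number of jobs has the well-known stationary distribution $\pi_n = (1-\rho)\rho^n$ with $\rho = \lambda/\mu$, and the mean response time follows from the stationary probabilities via Little's law. The function name and parameter values below are illustrative.

```python
def mm1_mean_response_time(lam, mu):
    """Mean response time of an M/M/1 queue via its stationary probabilities.

    The chain tracking the number of jobs has stationary distribution
    pi_n = (1 - rho) * rho**n with rho = lam / mu.  Mean response time
    then follows from Little's law: E[T] = E[N] / lam.
    """
    rho = lam / mu
    assert rho < 1, "queue must be stable"
    # E[N] = sum_n n * pi_n; summed over a large truncation to make the
    # stationary-probability view explicit (closed form: rho / (1 - rho)).
    mean_jobs = sum(n * (1 - rho) * rho**n for n in range(10_000))
    return mean_jobs / lam  # Little's law

print(mm1_mean_response_time(0.5, 1.0))  # close to 1 / (mu - lam) = 2.0
```

For a 1D chain like this, everything is available in closed form; the difficulty addressed in this chapter begins when no single dimension suffices.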

First, consider two processors, each serving its own M/M/1 queue, where one of the processors (the ``donor'') can help the other processor (the ``beneficiary'') with its jobs during times when the donor queue is empty. Specifically, the beneficiary jobs (respectively, donor jobs) arrive according to a Poisson process with rate $\lambda_B$ (respectively, $\lambda_D$). The service demands of these jobs have exponential distributions with rates $\mu_B$ and $\mu_D$, respectively. When there are no donor jobs and there are at least two beneficiary jobs, the donor server processes a beneficiary job, and hence beneficiary jobs complete with rate $2\mu_B$. Figure 3.1(a) shows a Markov chain on a two-dimensionally infinite state space (2D Markov chain) that models the behavior of this system. In this Markov chain, one dimension tracks the number of donor jobs, and the other dimension tracks the number of beneficiary jobs. Since the behavior of beneficiary jobs depends on the number of donor jobs in the system (specifically, whether there are 0 or $\geq 1$ donor jobs), the 2D Markov chain cannot simply be decomposed into two 1D Markov chains (Markov chains on a one-dimensionally infinite state space).
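Although the chain in Figure 3.1(a) is infinite in both dimensions, one can get a feel for its structure by solving a truncated version numerically. The sketch below (an illustration, not part of the text; function name, truncation level, and parameter values are assumptions) caps both queue lengths at $N$ and solves the global balance equations directly; truncation is itself only an approximation, accurate when $N$ is large relative to typical queue lengths.

```python
import numpy as np

def cycle_stealing_mean_jobs(lam_D, lam_B, mu_D, mu_B, N=40):
    """Mean numbers of donor and beneficiary jobs in a truncated version of
    the 2D cycle-stealing chain of Figure 3.1(a).

    State (n_D, n_B), both coordinates capped at N (truncation approximation).
    Returns (E[n_D], E[n_B]).
    """
    S = (N + 1) * (N + 1)
    idx = lambda nD, nB: nD * (N + 1) + nB
    Q = np.zeros((S, S))
    for nD in range(N + 1):
        for nB in range(N + 1):
            i = idx(nD, nB)
            if nD < N: Q[i, idx(nD + 1, nB)] += lam_D   # donor arrival
            if nB < N: Q[i, idx(nD, nB + 1)] += lam_B   # beneficiary arrival
            if nD > 0: Q[i, idx(nD - 1, nB)] += mu_D    # donor completion
            if nB > 0:
                # the donor helps only when its own queue is empty and
                # there are at least two beneficiary jobs
                rate = 2 * mu_B if (nD == 0 and nB >= 2) else mu_B
                Q[i, idx(nD, nB - 1)] += rate
    np.fill_diagonal(Q, -Q.sum(axis=1))
    # solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation
    A = Q.T.copy()
    A[-1, :] = 1.0
    b = np.zeros(S); b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    nD_vals = np.repeat(np.arange(N + 1), N + 1)
    nB_vals = np.tile(np.arange(N + 1), N + 1)
    return float(pi @ nD_vals), float(pi @ nB_vals)
```

Note that the donor's marginal behaves like an ordinary M/M/1 (the donor is unaffected by the beneficiary), which gives a useful sanity check on the truncated solution; it is the beneficiary's marginal that has no such simple form.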

Figure 3.1: Examples of multidimensional Markov chains that model multiserver systems: (a) a multiserver system with cycle stealing and (b) an M/M/2 queue with two preemptive priority classes.
[Figure: (a) fig/MC/CS2D.eps; (b) fig/MC/Prio2D.eps]

Next, consider an M/M/2 queue with two priority classes, where high priority jobs have preemptive priority over low priority jobs. Specifically, the low (respectively, high) priority jobs arrive according to a Poisson process with rate $\lambda_L$ (respectively, $\lambda_H$). The service demands of these jobs have exponential distributions with rates $\mu_L$ and $\mu_H$, respectively. When there are $n$ high priority jobs, only $\max\{2-n,0\}$ servers are available for low priority jobs. Figure 3.1(b) shows a 2D Markov chain that models the behavior of this system. In this Markov chain, one dimension tracks the number of high priority jobs, and the other dimension tracks the number of low priority jobs. Again, since the behavior of low priority jobs depends on the number of high priority jobs in the system, the 2D Markov chain cannot simply be decomposed into two 1D Markov chains. As we will see, when there are $m$ priority classes, the performance analysis of the lowest priority class involves an $m$D Markov chain (a Markov chain on an $m$-dimensionally infinite state space).
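One dimension of this chain is easy in isolation: under preemptive priority, the high priority class sees an ordinary M/M/2 queue, i.e. a 1D birth-death chain with birth rate $\lambda_H$ and death rate $\min(n,2)\,\mu_H$ in state $n$. The sketch below (illustrative; function name, truncation cap, and parameter values are assumptions, not from the text) computes that marginal distribution and the expected number of servers $\max\{2-n,0\}$ left over for low priority jobs; the hard part, which motivates this chapter, is that the low priority class sees this server availability fluctuate.

```python
def mm2_high_priority_dist(lam_H, mu_H, N=200):
    """Stationary distribution of the number of high priority jobs.

    Under preemptive priority the high priority class is an ordinary M/M/2
    queue: a birth-death chain with birth rate lam_H and death rate
    min(n, 2) * mu_H in state n.  N is an illustrative truncation cap.
    """
    # unnormalized birth-death probabilities: p_n = p_{n-1} * birth / death
    p = [1.0]
    for n in range(1, N + 1):
        p.append(p[-1] * lam_H / (min(n, 2) * mu_H))
    Z = sum(p)
    return [x / Z for x in p]

dist = mm2_high_priority_dist(0.8, 1.0)
# time-average number of servers available to low priority jobs:
servers_for_low = sum(max(2 - n, 0) * pn for n, pn in enumerate(dist))
```

The point is that `servers_for_low` is only a time average; the low priority class cannot be analyzed from this average alone, because its queue builds up precisely during the intervals when no server is available.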

The goal of this chapter is to provide an analytical tool, which we refer to as dimensionality reduction (DR), that allows us to analyze these and more complex multidimensional Markov chains. DR reduces a multidimensional Markov chain to a 1D Markov chain that closely approximates the multidimensional chain and can be analyzed efficiently. We will define a broad class of (multidimensional) Markov chains, which we refer to as RFB/GFB processes (recursive foreground-background processes or generalized foreground-background processes), that allow us to model many interesting multiserver systems with multiple classes of jobs, and then present an analysis of these Markov chains via DR. DR involves approximations, but we will show that the mean response time computed via DR is usually within a few percent of the simulated value. DR will enable one to analyze the performance of many multiserver systems by simply modeling them as RFB/GFB processes.



Takayuki Osogami 2005-07-19