
Response time of high priority jobs

We start with the simpler case of the high priority (class H) jobs. We derive the response time of the class H jobs by conditioning on the state that an arrival sees. Recall the Markov chain in Figure 3.28(a). By PASTA (Poisson arrivals see time averages), a class H arrival sees state $\ell$ with probability $\pi_\ell^{(H)}$. Let ${\rm Pr}(T_H\leq x\mid\ell)$ be the distribution function of the response time of a class H job that sees state $\ell$ upon arrival, and let $\mbox{{\bf\sf E}}\left[ (T_H)^r\mid\ell \right]$ be its $r$-th moment. Then the distribution function and the $r$-th moment of the response time of the class H jobs are given by

\begin{displaymath}
{\rm Pr}(T_H\leq x) = \sum_{\ell=0}^\infty {\rm Pr}(T_H\leq x\mid\ell)\,\pi_\ell^{(H)}
\qquad\mbox{and}\qquad
\mbox{{\bf\sf E}}\left[ (T_H)^r \right] = \sum_{\ell=0}^\infty \mbox{{\bf\sf E}}\left[ (T_H)^r\mid\ell \right] \pi_\ell^{(H)}.
\end{displaymath}

Thus, our goal is to derive ${\rm Pr}(T_H\leq x\mid\ell)$ and $\mbox{{\bf\sf E}}\left[ (T_H)^r\mid\ell \right]$ for each $\ell$.

First, observe that a tagged (class H) arrival that sees state $\ell$ will cause the system state to change to $\ell+1$ at that moment. We remove all the $\lambda_H$ arcs from the Markov chain in Figure 3.28(a), so that there are no more arrivals. (Note that the behavior of the tagged arrival is not affected by the jobs that will arrive after the tagged arrival, since class H jobs are served in FCFS order.) This enables us to view the response time for the tagged arrival as the passage time of this modified Markov chain from state $\ell+1$ to the state where the tagged arrival departs. The only complexity is in figuring out exactly in which state the tagged arrival departs.

If the tagged arrival sees state $\ell=0$, its response time is simply its service time, which is exponentially distributed with rate $\mu_H$. If the tagged arrival sees state $\ell\geq 1$ (so that the system state changes to $\ell+1$), the tagged arrival departs when the modified Markov chain hits state 1 or state 0, depending on whether or not the tagged arrival is the last job to be completed. By the memoryless property of exponential distributions, once only two jobs remain (both in service, one of them the tagged job), each is equally likely to complete last; hence the tagged arrival is the last job to be completed with probability $\frac{1}{2}$. Thus, the response time of the tagged job is the passage time from state $\ell+1$ to state 0 with probability $\frac{1}{2}$, and the passage time from state $\ell+1$ to state 1 with probability $\frac{1}{2}$. The distribution of each of these passage times is simply a convolution of independent exponential distributions, and its moments can be derived easily.
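The passage-time argument above can be sketched numerically. The following Python function (the function name and the restriction to the first two moments are illustrative choices, not from the text) computes $\mbox{{\bf\sf E}}\left[ T_H\mid\ell \right]$ and $\mbox{{\bf\sf E}}\left[ (T_H)^2\mid\ell \right]$ as the 50/50 mixture of the two passage times, each a convolution of independent exponentials:

```python
def response_moments_given_l(l, mu_H):
    """First two moments of the class H response time, conditioned on
    seeing state l upon arrival.

    With all lambda_H arcs removed, the response time is a 50/50
    mixture of the passage time from state l+1 to state 1 and the
    passage time from state l+1 to state 0; each passage time is a sum
    of independent exponentials.
    """
    if l == 0:
        mean = 1.0 / mu_H              # a bare Exp(mu_H) service time
        return mean, 2.0 / mu_H**2     # E[X^2] = 2/mu^2 for Exp(mu)
    # Downward steps l+1 -> l -> ... -> 1, each at rate 2*mu_H.
    rates_to_1 = [2.0 * mu_H] * l
    # Passage to state 0 adds one Exp(mu_H) step from state 1.
    rates_to_0 = rates_to_1 + [mu_H]

    def first_two_moments(rates):
        # Sum of independent exponentials: mean = sum 1/r_i,
        # variance = sum 1/r_i^2, so E[S^2] = var + mean^2.
        mean = sum(1.0 / r for r in rates)
        var = sum(1.0 / (r * r) for r in rates)
        return mean, var + mean * mean

    m1_to_1, m2_to_1 = first_two_moments(rates_to_1)
    m1_to_0, m2_to_0 = first_two_moments(rates_to_0)
    # The tagged job is the last to complete with probability 1/2.
    return 0.5 * (m1_to_1 + m1_to_0), 0.5 * (m2_to_1 + m2_to_0)
```

As a sanity check, a job that sees state $\ell=1$ enters service immediately, so its mean response time must be $1/\mu_H$; the mixture reproduces this, since $\frac{1}{2}\cdot\frac{1}{2\mu_H} + \frac{1}{2}\left(\frac{1}{2\mu_H}+\frac{1}{\mu_H}\right) = \frac{1}{\mu_H}$.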

Figure 3.29(a) summarizes the above argument. The response time of a job that sees state $\ell$ upon its arrival is the passage time from state $\ell+1$ to state 0 in the Markov chain in Figure 3.29(a). Observe that Figure 3.29(a) is obtained from Figure 3.28(a) by removing all the transitions with $\lambda_H$'s and splitting the transition from state 2 to state 1 (with rate $2\mu_H$) into two transitions, one to state 1 and the other to state 0 (each with rate $\mu_H$).
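To illustrate how the conditional quantities combine with the $\pi_\ell^{(H)}$, the sketch below computes $\mbox{{\bf\sf E}}\left[ T_H \right]$ by truncating the sum over $\ell$. It assumes the standard facts (not restated in this passage) that under preemptive priority class H sees a plain M/M/2 queue, whose stationary distribution is $\pi_0 = (1-\rho)/(1+\rho)$ and $\pi_\ell = 2\rho^\ell\pi_0$ for $\ell\geq 1$ with $\rho = \lambda_H/(2\mu_H)$; the passage-time mixture gives $\mbox{{\bf\sf E}}\left[ T_H\mid\ell \right] = (\ell+1)/(2\mu_H)$ for $\ell\geq 1$.

```python
def mean_response_H(lam_H, mu_H, n_trunc=2000):
    """E[T_H] = sum_l E[T_H | l] * pi_l^(H), with the sum truncated.

    Illustrative assumptions: pi_l^(H) is the stationary M/M/2
    queue-length distribution, and E[T_H | l] = (l + 1)/(2 mu_H) for
    l >= 1 (the 50/50 mixture of the two passage times).
    """
    rho = lam_H / (2.0 * mu_H)
    assert rho < 1.0, "class H must be stable"
    pi_0 = (1.0 - rho) / (1.0 + rho)
    total = pi_0 / mu_H                    # l = 0: a bare service time
    for l in range(1, n_trunc):
        pi_l = 2.0 * pi_0 * rho**l
        total += pi_l * (l + 1) / (2.0 * mu_H)
    return total
```

A useful cross-check: the truncated sum agrees with the classical M/M/2 mean response time $1/(\mu_H(1-\rho^2))$, obtained from Little's law and $\mbox{{\bf\sf E}}\left[ N \right] = 2\rho/(1-\rho^2)$.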

Figure 3.29: Markov chains used to compute the response time in an M/M/2 queue with two preemptive priority classes.
\includegraphics[width=.95\linewidth]{fig/MC/PrioH1Drestime.eps}
\includegraphics[width=.95\linewidth]{fig/MC/Prio1Drestime.eps}
(a) high priority jobs
(b) low priority jobs


Takayuki Osogami 2005-07-19