
Response time of low priority jobs

Next, we derive the response time of the low priority (class L) jobs, again by conditioning on the state that an arrival sees via PASTA. Recall the Markov chain in Figure 3.28(b). Let ${\rm Pr}(T_L\leq x\mid i,\ell)$ be the distribution function of the response time of the class L job that sees state $(i,\ell)$, i.e. phase $i$ of level $\ell$, when it arrives, and $\mbox{{\bf\sf E}}\left[ (T_L)^r\mid i,\ell \right]$ be its $r$-th moment. The distribution function of the response time of the class L jobs and its $r$-th moment are then given by

$\displaystyle {\rm Pr}(T_L\leq x) = \sum_{\ell=0}^\infty \sum_{i} {\rm Pr}(T_L\leq x\mid i,\ell)\, (\Vec{\pi_\ell})_i$   (3.11)
$\displaystyle \mbox{{\bf\sf E}}\left[ (T_L)^r \right] = \sum_{\ell=0}^\infty \sum_i \mbox{{\bf\sf E}}\left[ (T_L)^r\mid i,\ell \right] (\Vec{\pi_\ell})_i.$   (3.12)

Thus, our goal is to compute ${\rm Pr}(T_L\leq x\mid i,\ell)$ and $\mbox{{\bf\sf E}}\left[ (T_L)^r\mid i,\ell \right]$ for each $(i,\ell)$.
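To make the unconditioning in (3.11) and (3.12) concrete, here is a minimal sketch in Python (the array layout and the truncation level are our assumptions, not part of the analysis; in practice the infinite sum is truncated at a level beyond which the stationary probability mass is negligible):

    import numpy as np

    # Sketch of (3.12): uncondition E[(T_L)^r | i, ell] over the state
    # seen by a Poisson arrival (PASTA).  pi[ell] is the vector
    # ((pi_ell)_i)_i and cond[ell] the vector of conditional r-th
    # moments E[(T_L)^r | i, ell]; both lists are truncated at a level
    # ell_max chosen so the neglected tail mass is negligible.
    def moment_of_T_L(cond, pi):
        return sum(np.dot(pi[ell], cond[ell]) for ell in range(len(pi)))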

Observe that a tagged (class L) arrival that sees state $(i,\ell)$ will cause the system state to change to $(i,\ell+1)$ at that moment. We remove all the $\lambda_L$ arcs from the Markov chain in Figure 3.28(b), so that there are no more class L arrivals. (Note that the behavior of the tagged arrival is not affected by the class L jobs that will arrive after the tagged arrival, since class L jobs are served in FCFS order and we assume that the ``youngest'' class L job is preempted by a high priority job.) This enables us to view the response time for the tagged arrival as the first passage time of this modified Markov chain from state $(i,\ell+1)$ to the state where the tagged arrival departs. The only complexity is in figuring out exactly in which state the tagged arrival departs.
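In matrix terms, removing the $\lambda_L$ arcs deletes the forward blocks of the generator, and the removed rates must be returned to the diagonal so that each row of the modified generator still sums to zero. A minimal sketch, with hypothetical block names:

    import numpy as np

    # Sketch: given the forward (class L arrival) block F and the local
    # block L_loc of one level of the generator, drop the lambda_L arcs.
    # The diagonal of L_loc included -lambda_L, so the removed row sums
    # are added back to keep each generator row summing to zero.
    def remove_class_L_arrivals(F, L_loc):
        return L_loc + np.diag(F.sum(axis=1))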

If the tagged arrival sees state $(i,\ell=0)$ for some $i$, its response time is the passage time from state $(i,\ell=1)$ to (any state in) level $\ell=0$ in the modified Markov chain, since the tagged arrival is the only class L job in the modified system. If the tagged arrival sees a state in level $\ell\geq 1$ (and the system state is changed to level $\ell+1$), the tagged arrival may depart when the modified Markov chain hits level 1 or level 0, depending on whether the tagged arrival is the last job to be completed or not. We will compute the response time by conditioning on the first state in level $\ell=1$ that we hit in the modified Markov chain. Note that the first state in level $\ell=1$ is either $(0,\ell=1)$ or $(1,\ell=1)$, since there are no backward transitions in the bottom two rows in the (modified) Markov chain.

If $(1,\ell=1)$ is the first state in level $\ell=1$, we must have gotten there from $(1,\ell=2)$. This implies that the remaining class L job is the tagged arrival, since class L jobs are served in FCFS order and we assume that the ``youngest'' class L job is preempted by a high priority job. Thus, in this case the response time of the tagged arrival that sees $(i,\ell)$ upon its arrival is the passage time from $(i,\ell+1)$ to $(1,\ell=1)$ plus the passage time from $(1,\ell=1)$ to level 0.

If $(0,\ell=1)$ is the first state in level $\ell=1$, we must have gotten there from state $(0, \ell=2)$. This implies that the remaining class L job is equally likely to be the tagged arrival or not, by the memoryless property of exponential distributions. Thus, in this case the response time of the tagged arrival that sees $(i,\ell)$ upon its arrival is the passage time from $(i,\ell+1)$ to $(0,\ell=1)$ with probability $\frac{1}{2}$, and the passage time from $(i,\ell+1)$ to $(0,\ell=1)$ plus the passage time from $(0,\ell=1)$ to level 0 with probability $\frac{1}{2}$.
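Combining the two cases, and writing (in our notation) $T_{a\to b}$ for the first passage time from state $a$ to state (or level) $b$ in the modified chain, $H_1$ ($H_0$) for the event that the first state hit in level 1 is $(1,\ell=1)$ ($(0,\ell=1)$), and $p_1$ ($p_0$) for its probability, the mean conditional response time for $\ell\geq 1$ decomposes as

$\displaystyle \mbox{{\bf\sf E}}\left[ T_L\mid i,\ell \right] = p_1\Bigl(\mbox{{\bf\sf E}}\bigl[T_{(i,\ell+1)\to(1,1)}\,\big\vert\,H_1\bigr] + \mbox{{\bf\sf E}}\bigl[T_{(1,1)\to 0}\bigr]\Bigr) + p_0\Bigl(\mbox{{\bf\sf E}}\bigl[T_{(i,\ell+1)\to(0,1)}\,\big\vert\,H_0\bigr] + \frac{1}{2}\,\mbox{{\bf\sf E}}\bigl[T_{(0,1)\to 0}\bigr]\Bigr).$

The conditioning on $H_1$ and $H_0$ matters, since the passage time to level 1 and the state in which level 1 is first hit are dependent; the construction in Figure 3.29(b) encodes this dependence automatically, which is why the split transition described next suffices.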

Figure 3.29(b) summarizes the above argument. The response time of a job that sees state $(i,\ell)$ upon its arrival is the passage time from state $(i,\ell+1)$ to state 0 in the Markov chain in Figure 3.29(b). Observe that Figure 3.29(b) is obtained from Figure 3.28(b) by removing all the transitions with $\lambda_L$'s and splitting the transition (with rate $2\mu_L$) from state $(0, \ell=2)$ to state $(0,\ell=1)$ into two transitions (each with rate $\mu_L$), one to state $(0,\ell=1)$ and the other to state 0.

The passage time from any state $(i,\ell)$ to state 0 in the Markov chain in Figure 3.29(b) is a PH distribution (as defined in Section 2.2), where state $(i,\ell)$ is the initial state and state 0 is the absorbing state. Note that the distribution function and moments of the PH distribution can be computed via Proposition 4. In fact, by (3.11), the distribution of the response time of the low priority jobs can be represented as a single PH distribution (a mixture of PH distributions is a PH distribution by Proposition 3), where the initial state is $(i,\ell+1)$ with probability $(\Vec{\pi_\ell})_i$ for each $i$ and $\ell\geq 0$, and state 0 is the absorbing state. To compute the distribution function of this single PH distribution and its moments via Proposition 4, the state space of the Markov chain in Figure 3.29(b) needs to be truncated. One can simply truncate it at some large level, or one can use DR to reduce the 1D Markov chain to a Markov chain on a finite state space, just as Figure 3.4(b) is reduced to Figure 3.5.
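For reference, these are the standard PH closed forms: for a PH distribution with initial probability vector $\alpha$ over the transient states and sub-generator $T$, ${\rm Pr}(X\leq x) = 1 - \alpha e^{Tx}\vec{1}$ and $\mbox{{\bf\sf E}}\left[ X^r \right] = r!\,\alpha(-T)^{-r}\vec{1}$. A minimal sketch on the truncated chain (function names are ours; we assume the truncated sub-generator and the initial vector, with mass $(\Vec{\pi_\ell})_i$ on state $(i,\ell+1)$ renormalized after truncation, have already been assembled):

    import numpy as np
    from math import factorial
    from scipy.linalg import expm

    # alpha: initial probability row vector over the transient states;
    # T: sub-generator restricted to the transient (non-absorbing) states
    # of the truncated chain of Figure 3.29(b).
    def ph_cdf(alpha, T, x):
        # Pr(X <= x) = 1 - alpha exp(T x) 1
        return 1.0 - alpha @ expm(T * x) @ np.ones(T.shape[0])

    def ph_moment(alpha, T, r):
        # E[X^r] = r! alpha (-T)^{-r} 1
        M = np.linalg.matrix_power(np.linalg.inv(-T), r)
        return factorial(r) * (alpha @ M @ np.ones(T.shape[0]))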

Alternatively, the moments of the passage time (from level $\ell$ to level 0) can be computed using the method provided in Section 3.7, which can be more computationally efficient than the approach via Proposition 4. Since there are no forward transitions in the modified Markov chain, the expressions in Section 3.7 simplify significantly: specifically, the matrices $\mbox{\boldmath {$\cal{F}$}}^{(\ell)}$ and $\mbox{\boldmath {$\cal{F}$}}_r^{(\ell)}$ are zero matrices in the modified Markov chain for all $\ell$ and $r$.
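To give a flavor of the simplification (this is our sketch of the backward recursion that results, not the exact expressions of Section 3.7): with the forward blocks zero, the chain can only stay within a level or move down a level, so the mean passage time to level 0 is obtained in a single backward pass over the levels.

    import numpy as np

    # Sketch: mean first-passage time to level 0 when there are no
    # forward transitions.  L_blocks[k] is the local block of level k+1
    # and B_blocks[k] its backward block into level k (hypothetical
    # layout); n0 is the number of phases in level 0.  Rows of the full
    # generator sum to zero, so L 1 + B 1 = 0 within each level.
    def mean_passage_to_level0(L_blocks, B_blocks, n0):
        m = [np.zeros(n0)]            # passage time from level 0 is zero
        for L, B in zip(L_blocks, B_blocks):
            Linv = np.linalg.inv(-L)  # expected-sojourn operator of the level
            # time spent in this level, plus the remaining passage time
            # from the phase in which the level below is entered
            # ((-L)^{-1} B is a stochastic matrix)
            m.append(Linv @ np.ones(L.shape[0]) + Linv @ B @ m[-1])
        return m   # m[ell][j] = mean passage time from phase j of level ell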

In general, the mean response time in a system that is modeled as an RFB/GFB process can be derived via Little's law, as long as the number of jobs is well defined for each state. However, the response time may or may not be representable as a passage time in a modified Markov chain, and its distribution and moments may or may not be analyzable via the approach introduced in this section. Similarly, the mean number of jobs in the queue (the number of jobs that are waiting and not in service) can be derived via Little's law, but its distribution and moments may or may not be derivable via an analysis of passage times. Finally, note again that the error in the higher moments may be larger, since we match only the first three moments of the ``busy period'' in DR.
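For example, writing $N_L$ for the number of class L jobs in the system, Little's law gives

$\displaystyle \mbox{{\bf\sf E}}\left[ T_L \right] = \frac{\mbox{{\bf\sf E}}\left[ N_L \right]}{\lambda_L},$

and likewise the mean waiting time of the class L jobs is $\mbox{{\bf\sf E}}\left[ N_L^Q \right]/\lambda_L$, where $N_L^Q$ (our notation) is the number of class L jobs waiting in the queue.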

