
Analysis

With the above assumptions, the system under the threshold-based policy for reducing switching costs in cycle stealing can be modeled as a GFB (generalized foreground-background) process, as discussed in Section 3.4. An analysis of the GFB process via dimensionality reduction (DR) yields the mean response times of both the beneficiary jobs and the donor jobs. In fact, the donor job size, $X_D$, and the switching back time, $K_{ba}$, can be extended to general distributions, as we show in [152]. All the details of the analysis used to generate the results in Section 6.4 are provided in [152]. In particular, the mean response time of the donor jobs has an alternative derivation via an M/GI/1 queue with generalized vacations [57], as shown in the following theorem:
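The threshold mechanism itself is easy to mimic in simulation. The sketch below estimates the mean donor response time under an illustrative parameterization; the function name `simulate_donor`, the Poisson arrivals, and the exponential job-size and switch-back distributions are all assumptions made for the sketch, not part of the analysis. The beneficiary queue is not modeled, since donor response times do not depend on it under this policy.

```python
import random

def simulate_donor(lam, mu, n_th, k_ba_mean, n_jobs=200_000, seed=1):
    """Estimate the mean donor response time under the threshold policy.

    Illustrative assumptions: Poisson(lam) arrivals, exponential(mu) job
    sizes, and exponential switch-back times with mean k_ba_mean.  The
    donor server leaves when its queue empties and starts switching back
    once n_th donor jobs have accumulated.
    """
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(lam)
        arrivals.append(t)
    sizes = [rng.expovariate(mu) for _ in range(n_jobs)]

    total_resp = 0.0
    depart = 0.0              # departure time of the previous job
    i = 0
    while i < n_jobs:
        if arrivals[i] > depart:
            # The donor queue emptied, so the server switched away.  It
            # begins switching back once n_th jobs have accumulated.
            trigger = i + n_th - 1        # index of the n_th-th arrival
            if trigger >= n_jobs:
                n_jobs = i                # drop the incomplete last cycle
                break
            start = arrivals[trigger] + rng.expovariate(1.0 / k_ba_mean)
        else:
            start = depart                # server is already at the queue
        depart = start + sizes[i]
        total_resp += depart - arrivals[i]
        i += 1
    return total_resp / n_jobs
```

As a sanity check, with $N_D^{th}=1$ and a negligible switch-back time the estimate approaches the M/M/1 value $1/(\mu-\lambda)$; raising the threshold or the switch-back time increases the donor response time.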

Theorem 11   The mean response time of donor jobs, $\mbox{{\bf\sf E}}\left[ T_D \right]$, is given by

\begin{eqnarray*}
\mbox{{\bf\sf E}}\left[ T_D \right] & = & \mbox{{\bf\sf E}}\left[ \cdots \right] \cdots
\frac{\cdots}{\left((\cdots + \lambda_D\mbox{{\bf\sf E}}\left[ K_{ba} \right])p + (1-p)\right)},
\end{eqnarray*}

where
\begin{displaymath}
p = \frac{P_{N_D^{th}-1}} {P_{N_D^{th}-1}+P_0}
\end{displaymath} (6.1)

Here, $P_0$ is the probability that the donor server is at the donor queue and the number of donor jobs is zero, and $P_{N_D^{th}-1}$ is the probability that the donor server is either at the beneficiary queue or in the process of switching to the beneficiary queue and the number of donor jobs is $N_D^{th}-1$.

Note that $P_0$ and $P_{N_D^{th}-1}$ can be calculated from the limiting probabilities of the 1D Markov chain to which the GFB process is reduced when DR is applied to analyze the mean response time of the beneficiary jobs.
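Once these two limiting probabilities are available, Eq. (6.1) is a one-line computation. A minimal sketch follows; the function name `threshold_prob_p` and any numeric inputs are hypothetical, standing in for values produced by the DR analysis.

```python
def threshold_prob_p(p0, p_th_minus_1):
    """Evaluate Eq. (6.1): p = P_{N_D^th-1} / (P_{N_D^th-1} + P_0).

    p0           -- P_0, the probability that the donor server is at the
                    donor queue with zero donor jobs
    p_th_minus_1 -- P_{N_D^th-1}, the probability that the server is away
                    (at the beneficiary queue or switching to it) with
                    N_D^th - 1 donor jobs
    Both inputs would come from the limiting probabilities of the reduced
    1D Markov chain.
    """
    return p_th_minus_1 / (p_th_minus_1 + p0)
```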


Takayuki Osogami 2005-07-19