
Examples of Markovian arrival processes

We start by providing canonical examples of MAPs. Again, we provide both a pictorial explanation and a more formal explanation. We will view a MAP as a point process, which is a random sequence of ``events'' such as the epochs of job arrivals.

Figure 3.9: Examples of Markovian arrival processes. (a) Poisson; (b) PH (Erlang-2) renewal; (c) MMPP(2).

First, a Poisson process is a MAP. In a Poisson process, the intervals between consecutive events are independent and identically distributed exponential random variables. Figure 3.9(a) illustrates a Poisson process as the epochs of transitions in a Markov chain. When there is a transition (from a state to itself) in the Markov chain, there is an event in the Poisson process. Recall the birth-and-death process modeling an M/M/1 queue (Figure 3.8(a)), where jobs arrive according to a Poisson process with rate $\lambda$. At the moment of an ``event'' in the Poisson process, the number of jobs (or the level of the birth-and-death process) is incremented by one.
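As an illustration, the Poisson process can be simulated in a few lines of Python by drawing i.i.d. exponential inter-event times; each draw corresponds to one self-transition of the one-state Markov chain in Figure 3.9(a). The rate $\lambda=2$, the horizon, and the seed below are arbitrary choices for the sketch, not values from the text.

```python
import random

def poisson_process(rate, horizon, rng):
    """Event epochs of a Poisson process with the given rate.

    Inter-event times are i.i.d. exponential(rate); each event is a
    (self-)transition of the one-state Markov chain of Figure 3.9(a).
    """
    epochs, t = [], 0.0
    while True:
        t += rng.expovariate(rate)  # exponential inter-event time
        if t > horizon:
            return epochs
        epochs.append(t)

rng = random.Random(1)
events = poisson_process(rate=2.0, horizon=10_000.0, rng=rng)
# By the law of large numbers, the empirical rate should be close to 2.0.
print(len(events) / 10_000.0)
```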

Second, a PH renewal process is a MAP. In a PH renewal process, the intervals between consecutive events are independent and identically distributed with a PH distribution. Figure 3.9(b) illustrates a PH renewal process, where the PH distribution is an Erlang-2 distribution, as the epochs of some transitions in a Markov chain. There is a transition (thick arrow) associated with an event. This transition corresponds to the transition into the absorbing state in the Markov chain whose absorption time defines the Erlang-2 distribution (see Figure 2.6(b)). Thus, a PH renewal process has an event when the Markov chain (for the PH distribution) transitions into the absorbing state, and at this moment the Markov chain immediately transitions back to an initial state according to the initial probability vector. Figure 3.10(a) shows a QBD process modeling an Er/M/1 queue, where jobs arrive according to the PH renewal process shown in Figure 3.9(b), and their service demand has an exponential distribution with rate $\mu$.
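The Erlang-2 renewal process can be sketched the same way: each inter-event time is the absorption time of the two-stage chain of Figure 2.6(b), i.e. the sum of two exponential stages. The stage rate $\lambda=2$, horizon, and seed are again arbitrary choices for illustration.

```python
import random

def erlang2_renewal(rate, horizon, rng):
    """Event epochs of a PH renewal process with Erlang-2(rate) intervals.

    Each interval traverses two exponential(rate) stages in series,
    the absorption time of the Markov chain in Figure 2.6(b).
    """
    epochs, t = [], 0.0
    while True:
        t += rng.expovariate(rate) + rng.expovariate(rate)  # two stages
        if t > horizon:
            return epochs
        epochs.append(t)

rng = random.Random(7)
events = erlang2_renewal(rate=2.0, horizon=10_000.0, rng=rng)
# The mean inter-event time is 2/rate = 1.0, so the empirical event
# rate should be close to 1.0.
print(len(events) / 10_000.0)
```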

Figure 3.10: QBD processes for MAP/M/1 queues. (a) Er/M/1; (b) MMPP(2)/M/1.

More formally, consider the PH( $\Vec\tau,\mathbf{T}$) distribution (as defined in Section 2.2). Let $\Vec{t}=-\mathbf{T}\Vec{1}$. The matrix $\mathbf{D} = \mathbf{T} + \Vec{t}\Vec\tau$ defines the generator matrix of a Markov chain. For example, the above Erlang-2 distribution has parameters

\begin{displaymath}
\Vec\tau = (1,0)
\quad\mbox{and}\quad
\mathbf{T} = \left(\begin{array}{cc}
-\lambda & \lambda \\
0 & -\lambda
\end{array}\right),
\end{displaymath}

and hence

\begin{displaymath}
\Vec{t}\Vec\tau = \left(\begin{array}{c}
0 \\
\lambda
\end{array}\right)(1,\ 0)
= \left(\begin{array}{cc}
0 & 0 \\
\lambda & 0
\end{array}\right).
\end{displaymath}

The nonzero $(i,j)$ element of $\Vec{t}\Vec\tau$ is a transition associated with an event. This transition can be seen as a transition into the absorbing state followed by an instantaneous transition into an initial state according to the initial probability vector $\Vec\tau$. The $(i,j)$ element of $\mathbf{T}$, for $i\neq j$, is a transition that is not associated with an event.
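The construction $\mathbf{D} = \mathbf{T} + \Vec{t}\Vec\tau$ is easy to check numerically. The short Python sketch below builds $\mathbf{D}$ for the Erlang-2 example (the value $\lambda=2$ is an arbitrary choice) and verifies that each row of the resulting generator sums to zero.

```python
# Check of D = T + t*tau for the Erlang-2 example; lambda = 2.0 is an
# arbitrary illustrative value.
lam = 2.0
tau = [1.0, 0.0]                 # initial probability vector
T = [[-lam, lam], [0.0, -lam]]   # PH generator (transient part)
t = [-sum(row) for row in T]     # exit-rate vector: t = -T*1

# Outer product t*tau gives the event transitions; D = T + t*tau.
D = [[T[i][j] + t[i] * tau[j] for j in range(2)] for i in range(2)]
print(D)                          # [[-2.0, 2.0], [2.0, -2.0]]
# Each row of a generator matrix sums to zero.
print([sum(row) for row in D])    # [0.0, 0.0]
```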

Our third example, a Markov modulated Poisson process (MMPP), allows correlation between inter-event times. Figure 3.9(c) illustrates an MMPP of order 2, MMPP(2), as the epochs of some transitions in a Markov chain. There are transitions (thick arrows) associated with events; the transitions between the two states are not associated with events. While the Markov chain is in state 1, events occur with rate $\lambda_1$, and while the Markov chain is in state 2, events occur with rate $\lambda_2$. For example, an MMPP(2) can be seen as a job arrival process that alternates between a high arrival period (state 1) and a low arrival period (state 2). During the high arrival period, the arrival rate is $\lambda_1$, and during the low arrival period, the arrival rate is $\lambda_2$, where $\lambda_1 > \lambda_2$. Figure 3.10(b) shows a QBD process modeling an MMPP(2)/M/1 queue, where jobs arrive according to the MMPP(2) shown in Figure 3.9(c), and their service demand has an exponential distribution with rate $\mu$.

More formally, the Markov chain in Figure 3.9(c) has infinitesimal generator

\begin{displaymath}
\mathbf{D} = \mathbf{D}_0 + \mathbf{D}_1,
\end{displaymath}

where

\begin{displaymath}
\mathbf{D}_0 = \left(\begin{array}{cc}
-(\alpha_{12}+\lambda_1) & \alpha_{12} \\
\alpha_{21} & -(\alpha_{21}+\lambda_2)
\end{array}\right)
\quad\mbox{and}\quad
\mathbf{D}_1 = \left(\begin{array}{cc}
\lambda_1 & 0 \\
0 & \lambda_2
\end{array}\right).
\end{displaymath}

The nonzero $(i,j)$ element of $\mathbf{D}_1$ is a transition associated with an event in the MMPP(2), and the $(i,j)$ element of $\mathbf{D}_0$ (here, $i\neq j$) is a transition that is not associated with an event.
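The decomposition $\mathbf{D} = \mathbf{D}_0 + \mathbf{D}_1$ admits the same numerical sanity check: $\mathbf{D}_1$ holds the event rates, $\mathbf{D}_0$ the non-event transitions and the diagonal, and their sum must be a generator. The rates below ($\lambda_1=5$, $\lambda_2=1$, $\alpha_{12}=0.5$, $\alpha_{21}=0.25$) are illustrative assumptions, not values from the text.

```python
# Sanity check of D = D0 + D1 for an MMPP(2); all rates below are
# illustrative assumptions.
lam1, lam2 = 5.0, 1.0    # event rates in the high/low state
a12, a21 = 0.5, 0.25     # modulating transition rates between states

D0 = [[-(a12 + lam1), a12],
      [a21, -(a21 + lam2)]]   # non-event transitions and diagonal
D1 = [[lam1, 0.0],
      [0.0, lam2]]            # event transitions (thick arrows)

D = [[D0[i][j] + D1[i][j] for j in range(2)] for i in range(2)]
print(D)                      # [[-0.5, 0.5], [0.25, -0.25]]
# D is the generator of the modulating two-state chain: rows sum to zero.
print([sum(row) for row in D])
```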


Takayuki Osogami 2005-07-19