Concluding remarks

DR allows us to analyze the performance of a multiserver system whose behavior is modeled as a multidimensional Markov chain. Since it is a nontrivial task to determine whether and how DR can be applied to a particular multiserver system, we formally define a class of multidimensional Markov chains, called RFB/GFB processes (recursive foreground-background processes or generalized foreground-background processes), that can be analyzed via DR. The definition of RFB/GFB processes makes it easier to determine whether a given multiserver system can be analyzed via DR, and our analysis of RFB/GFB processes enables one to analyze such a system simply by modeling it as an RFB/GFB process. In fact, many multiserver systems with resource sharing or job prioritization can be modeled as RFB/GFB processes, and we will analyze the performance of some of these systems in Part II:

- multiserver systems with multiple priority classes (see Chapter 4),
- size-based task assignment with cycle stealing under immediate dispatching, `SBCS-ID`, for server farms (see Chapter 5),
- size-based task assignment with cycle stealing under central queue, `SBCS-CQ`, for server farms (see Chapter 5),
- threshold-based policies for reducing switching costs in cycle stealing (see Chapter 6), and
- various threshold-based policies for the Beneficiary-Donor model (see Chapter 7).

Beyond the analysis of multiserver systems, DR and the ideas behind it have broad applicability in computer science, engineering, operations research, and other fields where Markov chains on large state spaces arise. The ideas in DR can be used to reduce the size of the state space from infinite to finite or from large to small. For example, many optimization problems in stochastic environments can be formulated as Markov decision problems [159], but computing the optimal solution (optimal policy) is often prohibitively expensive due to the large state space. The ideas in DR may be used to reduce the size of the state space of a Markov decision process, thereby reducing the computational complexity of solving the Markov decision problem.
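As a minimal, hypothetical illustration of this idea (not taken from this chapter, and much simpler than DR itself), consider an M/M/1 admission-control problem: the queue-length state space is infinite, but truncating it at a finite level N yields a finite Markov decision process that standard value iteration can solve. All parameter names and costs below are invented for the sketch.

```python
# Hypothetical sketch: value iteration on a truncated (infinite -> finite)
# state space for an M/M/1 admission-control MDP. Not the author's DR
# implementation; it only illustrates state-space reduction for MDPs.

def value_iteration(lam=0.6, mu=1.0, N=50, hold_cost=1.0, reject_cost=5.0,
                    discount=0.95, tol=1e-8):
    """Discounted cost-to-go on truncated states 0..N (queue length)."""
    rate = lam + mu          # uniformization constant
    V = [0.0] * (N + 1)
    while True:
        Vn = [0.0] * (N + 1)
        for n in range(N + 1):
            up = min(n + 1, N)       # truncation boundary at N
            down = max(n - 1, 0)     # fictitious self-loop at 0
            # Action "admit": an arrival increases the queue length.
            admit = (lam * V[up] + mu * V[down]) / rate
            # Action "reject": pay a penalty per rejected arrival; state stays.
            reject = (lam * (reject_cost + V[n]) + mu * V[down]) / rate
            Vn[n] = n * hold_cost + discount * min(admit, reject)
        if max(abs(a - b) for a, b in zip(V, Vn)) < tol:
            return Vn
        V = Vn

V = value_iteration()
# Sanity check: the cost-to-go is nondecreasing in the queue length.
assert all(V[i] <= V[i + 1] for i in range(len(V) - 1))
```

The truncation level N plays the role that the reduced state space plays in DR: it trades a small, controllable approximation error for a finite (and small) problem that standard MDP solvers can handle.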

In response to increasing demand, DR and its approximations, DR-PI and DR-CI, as described in this chapter have been implemented, and the latest implementation is available at an online code repository: