
Impact of service demand variability and prioritization in multiserver systems

In studying ``how many servers are best?'' in multiserver systems with multiple priority classes, we find that the performance of a multiserver system can differ substantially from that of its single server counterpart. In particular, the variability of service demand and the prioritization of jobs can have a much larger impact on mean response time in single server (or a-few-server) systems than in multiserver (or many-server) systems. Specifically, the mean response time of single server and multiserver systems can be characterized primarily by three rules of thumb.

(i) A single server system has an advantage over a multiserver system with respect to utilization, and this relative advantage becomes greater at higher load.
(ii) A multiserver system has an advantage over a single server system with respect to reducing the impact of job size variability on the mean response time, and this advantage becomes greater at higher load and at larger job size variability.
When jobs are served in FCFS order, the mean response time of single server and multiserver systems can be characterized primarily by the above two rules. However, these two rules are not sufficient when there are priorities among jobs, since
(iii) a multiserver system has an advantage over a single server system with respect to reducing the impact of prioritization on the mean response time of low priority jobs, and this advantage becomes greater when the mean and/or variability of the higher priority job sizes are larger and/or when the load of the higher priority jobs is higher.
We find that these three rules largely characterize how the optimal number of servers is affected by job size variability and prioritization; a small simulation sketch below illustrates rules (i) and (ii).
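
As a rough illustration of rules (i) and (ii), the following Python sketch (not part of the original analysis; the number of servers, loads, and job size distributions are illustrative assumptions) simulates an FCFS single fast server and an FCFS k-server system of the same total capacity, for job sizes with low and high variability. Rule (i) suggests that the single fast server should win at low load and low variability, while rule (ii) suggests that the gap should narrow, and can reverse, as load and job size variability grow.

import heapq
import random
from statistics import fmean

def mean_response_fcfs(num_servers, speed, lam, size_sampler, num_jobs=200_000, seed=0):
    """FCFS M/G/c sketch: each arriving job is assigned, in arrival order,
    to the server that becomes free earliest."""
    rng = random.Random(seed)
    free_at = [0.0] * num_servers            # next idle instant of each server
    t, responses = 0.0, []
    for _ in range(num_jobs):
        t += rng.expovariate(lam)            # Poisson arrivals, total rate lam
        size = size_sampler(rng) / speed     # faster server => shorter service time
        start = max(t, heapq.heappop(free_at))
        heapq.heappush(free_at, start + size)
        responses.append(start + size - t)
    return fmean(responses)

# Job size distributions with mean 1: exponential (C^2 = 1) and a
# two-branch hyperexponential with C^2 about 5.5 (higher variability).
exp_size = lambda rng: rng.expovariate(1.0)
h2_size = lambda rng: rng.expovariate(1 / 0.5) if rng.random() < 0.9 else rng.expovariate(1 / 5.5)

if __name__ == "__main__":
    k = 4
    for rho in (0.5, 0.95):                  # total load on each system
        lam = rho * k                        # same total capacity k in both systems
        for name, sampler in (("Exp", exp_size), ("H2 ", h2_size)):
            single = mean_response_fcfs(1, k, lam, sampler)
            multi = mean_response_fcfs(k, 1, lam, sampler)
            print(f"rho={rho}, {name}: 1 fast server E[T]={single:.2f}, "
                  f"{k} slow servers E[T]={multi:.2f}")

The sketch exploits the fact that, under FCFS, each job simply starts on whichever server becomes free earliest, so a heap of server-free times suffices and no explicit event queue is needed.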

These rules have important implications for the design of resource allocation and scheduling policies in multiserver systems. Namely, combating the variability of service demand (e.g., by prioritizing small jobs) is important in single server (or a-few-server) systems, but prioritizing small jobs is not as effective in improving mean response time in multiserver (or many-server) systems, as the second sketch below suggests.
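
To make this implication concrete, the sketch below (again illustrative: the loads and distributions are assumptions, and the priority rule is a generic non-preemptive smallest-job-first, not one of the policies analyzed in the thesis) compares FCFS with smallest-job-first on a single fast server and on k slower servers of equal total capacity. The rules above suggest that the improvement from favoring small jobs should be considerably larger in the single server configuration.

import heapq
import random
from collections import deque
from statistics import fmean

def mean_response(num_servers, speed, lam, smallest_first, num_jobs=200_000, seed=1):
    """Non-preemptive M/G/c: when a server frees, it takes either the oldest
    waiting job (FCFS) or the smallest waiting job (size-based priority)."""
    rng = random.Random(seed)

    def job_size():
        # high-variability sizes, mean 1 (C^2 about 5.5), scaled by server speed
        s = rng.expovariate(1 / 0.5) if rng.random() < 0.9 else rng.expovariate(1 / 5.5)
        return s / speed

    events, seq = [], 0

    def schedule(t, kind, arrival=None):
        nonlocal seq
        heapq.heappush(events, (t, seq, kind, arrival))
        seq += 1

    schedule(rng.expovariate(lam), "arrival")
    waiting = [] if smallest_first else deque()      # heap of (size, arrival) or FIFO queue
    idle, responses = num_servers, []
    while len(responses) < num_jobs:
        t, _, kind, arrival = heapq.heappop(events)
        if kind == "arrival":
            schedule(t + rng.expovariate(lam), "arrival")
            size = job_size()
            if idle:
                idle -= 1
                schedule(t + size, "departure", t)
            elif smallest_first:
                heapq.heappush(waiting, (size, t))
            else:
                waiting.append((size, t))
        else:                                        # departure
            responses.append(t - arrival)
            if waiting:
                size, a = heapq.heappop(waiting) if smallest_first else waiting.popleft()
                schedule(t + size, "departure", a)
            else:
                idle += 1
    return fmean(responses)

if __name__ == "__main__":
    k, rho = 4, 0.9
    lam = rho * k                                    # same total capacity in both systems
    for servers, speed, label in ((1, k, "1 fast server "), (k, 1, f"{k} slow servers")):
        fcfs = mean_response(servers, speed, lam, smallest_first=False)
        sjf = mean_response(servers, speed, lam, smallest_first=True)
        print(f"{label}: FCFS E[T]={fcfs:.2f}, smallest-first E[T]={sjf:.2f}")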

We have also studied the impact of the variability of donor job sizes (the ``long'' jobs in SBCS-ID and SBCS-CQ) on the mean response time of the beneficiary jobs (the ``short'' jobs in SBCS-ID and SBCS-CQ) under various systems with cycle stealing, where the donor server processes beneficiary jobs when there are no donor jobs. Note that variable donor job sizes imply irregular help from the donor server. We find that the impact of donor job size variability is surprisingly small, but that this impact grows when the beneficiary jobs rely more heavily on help from the donor server; a simplified cycle-stealing sketch below illustrates the comparison.
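
The following simplified sketch of cycle stealing with immediate dispatch is a caricature, not the exact SBCS-ID model: we assume non-preemptive service (a long job arriving while the donor serves a short job simply waits), the donor never pulls queued short jobs after the arrival instant, and all rates and size distributions are illustrative assumptions. It compares the mean response time of short jobs when donor job sizes are exponential versus highly variable with the same mean.

import heapq
import random
from collections import deque
from statistics import fmean

def mean_short_response(lam_short, lam_long, long_size, num_short=200_000, seed=2):
    """Two-server cycle-stealing sketch: short jobs normally use the beneficiary
    server (FCFS); a short job arriving while the donor server is idle is
    dispatched to the donor instead."""
    rng = random.Random(seed)
    events, seq = [], 0

    def schedule(t, kind, payload=None):
        nonlocal seq
        heapq.heappush(events, (t, seq, kind, payload))
        seq += 1

    schedule(rng.expovariate(lam_short), "arr_short")
    schedule(rng.expovariate(lam_long), "arr_long")
    short_queue, long_queue = deque(), deque()
    short_busy = donor_busy = False
    responses = []

    while len(responses) < num_short:
        t, _, kind, payload = heapq.heappop(events)
        if kind == "arr_short":
            schedule(t + rng.expovariate(lam_short), "arr_short")
            size = rng.expovariate(1.0)                 # short jobs: Exp, mean 1
            if not donor_busy and not long_queue:       # donor idle: steal its cycles
                donor_busy = True
                schedule(t + size, "dep_donor", ("short", t))
            elif not short_busy:
                short_busy = True
                schedule(t + size, "dep_short", t)
            else:
                short_queue.append((t, size))
        elif kind == "arr_long":
            schedule(t + rng.expovariate(lam_long), "arr_long")
            size = long_size(rng)
            if not donor_busy:
                donor_busy = True
                schedule(t + size, "dep_donor", ("long", None))
            else:
                long_queue.append(size)
        elif kind == "dep_short":                       # beneficiary server completion
            responses.append(t - payload)
            if short_queue:
                arrival, size = short_queue.popleft()
                schedule(t + size, "dep_short", arrival)
            else:
                short_busy = False
        else:                                           # donor server completion
            tag, arrival = payload
            if tag == "short":
                responses.append(t - arrival)
            if long_queue:                              # donor's own jobs come first
                schedule(t + long_queue.popleft(), "dep_donor", ("long", None))
            else:
                donor_busy = False
    return fmean(responses)

if __name__ == "__main__":
    lam_short, lam_long = 0.7, 0.05                     # short load 0.7, long load 0.5
    exp_long = lambda rng: rng.expovariate(1 / 10)      # donor sizes: Exp, mean 10
    h2_long = lambda rng: (rng.expovariate(1 / 5) if rng.random() < 0.9
                           else rng.expovariate(1 / 55))  # same mean, C^2 about 5.5
    print("E[T] of short jobs, Exp donor sizes:", round(mean_short_response(lam_short, lam_long, exp_long), 2))
    print("E[T] of short jobs, H2 donor sizes :", round(mean_short_response(lam_short, lam_long, h2_long), 2))

Increasing lam_short, so that the beneficiary jobs rely more on stolen donor cycles, is the natural knob for exploring the second part of the observation above.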

