In studying the question of ``how many servers are best?'' in multiserver systems with multiple priority classes, we find that the performance of a multiserver system can differ markedly from that of its single-server counterpart. In particular, the variability of service demand and the prioritization of jobs can have a much larger impact on mean response time in single-server (or a-few-server) systems than in multiserver (or many-server) systems. Specifically, we find that the mean response time of single-server and multiserver systems can be characterized primarily by three rules of thumb.
These rules have important implications for the design of resource allocation and scheduling policies in multiserver systems. Namely, combating the variability of service demand (e.g., by prioritizing small jobs) is important when designing resource allocation policies for single-server (or a-few-server) systems, but it is far less effective at improving mean response time in multiserver (or many-server) systems.
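As a rough numerical illustration of this effect (a sketch of ours, not part of the original analysis), one can use the classical two-moment approximation $E[T] \approx E[S] + \frac{C^2+1}{2}\,E[W_q^{M/M/k}]$ for the M/G/k queue, where $C^2$ is the squared coefficient of variation of the job sizes and each of the $k$ servers runs at speed $1/k$ so that total capacity is held fixed. The parameter values below (arrival rate 0.8, mean size 1, $C^2 \in \{1, 16\}$) are arbitrary choices for illustration:

```python
from math import factorial

def erlang_c(k, a):
    """Probability of waiting in an M/M/k queue with offered load a = lam/mu (a < k)."""
    top = a**k / factorial(k)
    bottom = sum(a**n / factorial(n) for n in range(k)) * (1 - a / k) + top
    return top / bottom

def mean_response_mgk(k, lam, mean_size, scv):
    """Two-moment approximation for the M/G/k mean response time:
    E[T] ~= E[S] + (C^2 + 1)/2 * E[Wq of the matching M/M/k].
    Each of the k servers runs at speed 1/k, so per-job service time is k*mean_size."""
    s = k * mean_size                 # service time on a speed-1/k server
    mu = 1.0 / s
    a = lam / mu                      # offered load; utilization is a/k
    assert a < k, "system must be stable"
    wq_mmk = erlang_c(k, a) / (k * mu - lam)   # M/M/k mean waiting time
    return s + 0.5 * (scv + 1.0) * wq_mmk

lam, mean_size = 0.8, 1.0             # utilization 0.8 in every configuration
for k in (1, 10):
    t_low = mean_response_mgk(k, lam, mean_size, scv=1.0)    # exponential sizes
    t_high = mean_response_mgk(k, lam, mean_size, scv=16.0)  # highly variable sizes
    print(f"k={k:2d}: E[T] at C2=1: {t_low:6.2f}  at C2=16: {t_high:6.2f}  "
          f"inflation: {t_high/t_low:.2f}x")
```

Under these (hypothetical) parameters, raising $C^2$ from 1 to 16 inflates mean response time by a factor of 7 for a single fast server but only by about 2.3 for ten slow servers, consistent with the rule of thumb that service demand variability matters far more in a-few-server systems.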
We have also studied how the variability of donor job sizes (the ``long'' jobs in SBCS-ID and SBCS-CQ) affects the mean response time of the beneficiary jobs (the ``short'' jobs in SBCS-ID and SBCS-CQ) under various systems with cycle stealing, where the donor server processes beneficiary jobs whenever there are no donor jobs. Note that variable donor job sizes imply irregular help from the donor server. We find that the impact of donor job size variability is surprisingly small, although it grows as the beneficiary jobs come to rely more heavily on help from the donor server.
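To see why variable donor job sizes imply irregular help, note that when donor arrivals are Poisson, the stretches during which the donor serves its own jobs (and gives no help) are M/G/1 busy periods, with the standard moments $E[B] = E[S]/(1-\rho)$ and $E[B^2] = E[S^2]/(1-\rho)^3$. The mean length of an unavailable stretch thus depends only on the donor load, while its variance grows with the donor job size variability. The sketch below (our illustration, with arbitrarily chosen parameters) compares two donor size distributions with the same mean but different second moments:

```python
def busy_period_moments(lam, es, es2):
    """First two moments of the M/G/1 busy period, i.e. the stretch during
    which the donor works on its own jobs and provides no help:
      E[B]  = E[S]  / (1 - rho)
      E[B2] = E[S2] / (1 - rho)**3,   rho = lam * E[S]."""
    rho = lam * es
    assert rho < 1, "donor queue must be stable"
    eb = es / (1 - rho)
    eb2 = es2 / (1 - rho) ** 3
    return eb, eb2

lam = 0.5  # donor arrival rate; donor load rho = 0.5
# Donor job sizes with the same mean E[S] = 1 but different variability:
# exponential (E[S^2] = 2) vs. a highly variable distribution (E[S^2] = 20).
for label, es2 in [("low-variability donor ", 2.0),
                   ("high-variability donor", 20.0)]:
    eb, eb2 = busy_period_moments(lam, 1.0, es2)
    print(f"{label}: E[B] = {eb:.2f}, Var(B) = {eb2 - eb * eb:.2f}")
```

Both donors withhold help for the same length of time on average ($E[B] = 2$ here), but the high-variability donor's unavailable stretches have thirteen times the variance, i.e. the same total help delivered in much burstier, more irregular installments.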