We consider common resource sharing policies for multiserver systems, including cycle stealing policies, priority queueing, task assignment policies, and threshold policies. Even these simple policies are difficult to analyze because their underlying Markov chains are infinite in more than one dimension; the dimensionality of the chain is typically equal to the number of servers or the number of job classes. We introduce a new analysis technique, which we call Dimensionality Reduction for Markov Chains, that enables the first near-exact analysis of many fundamental resource sharing problems for multiserver systems.
The talk will focus on new insights obtained from analyzing these policies, as well as proposals for improved policies. We will consider questions such as: When does cycle stealing pay? How do different cycle stealing methods compare? How does the multiserver priority queue compare with the single server priority queue? What is the effect of variability in service demand? Under threshold-based load sharing, where a ``donor'' machine helps a ``beneficiary'' machine beyond some threshold point, who should control the threshold: the donor or the beneficiary? How many thresholds does one need?
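To make the setting concrete, here is a minimal sketch (not the talk's analytic method, which is exact analysis of the Markov chain, not simulation) of the kind of two-dimensionally infinite chain involved. It simulates a threshold-based cycle-stealing system: a donor server helps the beneficiary queue whenever the donor is idle and the beneficiary queue length reaches a threshold. All rates and the threshold value below are illustrative assumptions.

```python
import random

def mean_beneficiary_jobs(lam_b, lam_d, mu, threshold, jumps=200_000, seed=1):
    """Simulate the 2-D continuous-time Markov chain (n_b, n_d) for a
    threshold-based cycle-stealing policy and return the time-average
    number of beneficiary jobs, E[N_b].

    n_b, n_d  -- number of beneficiary / donor jobs in system
    lam_b/lam_d -- Poisson arrival rates; mu -- exponential service rate
    The donor serves a beneficiary job only while it is idle (n_d == 0)
    and n_b >= threshold (threshold assumed >= 1).
    """
    rng = random.Random(seed)
    n_b = n_d = 0
    total_time = 0.0
    area = 0.0  # time-integral of n_b
    for _ in range(jumps):
        # Enabled transitions and their rates in the current state.
        rates = [("arr_b", lam_b), ("arr_d", lam_d)]
        if n_d > 0:
            rates.append(("dep_d", mu))
        if n_b > 0:
            rates.append(("dep_b", mu))
            if n_d == 0 and n_b >= threshold:
                rates.append(("steal", mu))  # idle donor serves beneficiary
        total = sum(r for _, r in rates)
        dt = rng.expovariate(total)      # exponential holding time
        area += n_b * dt
        total_time += dt
        # Pick the next transition in proportion to its rate.
        x = rng.random() * total
        for name, r in rates:
            x -= r
            if x <= 0:
                break
        if name == "arr_b":
            n_b += 1
        elif name == "arr_d":
            n_d += 1
        elif name == "dep_d":
            n_d -= 1
        else:                            # dep_b or steal
            n_b -= 1
    return area / total_time

# A very large threshold effectively disables cycle stealing, so the
# beneficiary behaves like a plain M/M/1 queue; a low threshold lets
# the donor help, reducing the beneficiary's mean queue length.
with_stealing = mean_beneficiary_jobs(0.8, 0.3, 1.0, threshold=1)
no_stealing = mean_beneficiary_jobs(0.8, 0.3, 1.0, threshold=10**9)
```

Even this toy model shows why exact analysis is hard: the state (n_b, n_d) is unbounded in both coordinates, so matrix-analytic methods built for chains that are infinite in only one dimension do not directly apply, which is the gap Dimensionality Reduction addresses.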