It has become nearly impossible to write a paper about anything in operating systems without including some discussion of performance. Usually, the performance section concentrates on a few "microbenchmarks" which demonstrate that whatever is being described in the paper can, has been, or might be efficiently implemented. For example, the time to execute a null remote procedure call, the time to take a system trap, and the time to access a page of a mapped file have all been used at one time or another to show that the system implementing the function is either efficient (if it was built by the authors) or inefficient (if it was built by anybody else).
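To make the flavor of such measurements concrete, here is a minimal sketch of a system-call microbenchmark, not taken from any particular paper: it times a tight loop of `os.getppid()` calls (a near-trivial kernel entry) and reports the mean cost per call. The iteration count and the neglect of loop overhead are assumptions of this sketch.

```python
import os
import time

def trap_microbenchmark(n: int = 100_000) -> float:
    """Return the mean wall-clock cost, in nanoseconds, of one
    getppid() system call, averaged over n back-to-back calls."""
    start = time.perf_counter()
    for _ in range(n):
        os.getppid()          # the code path under test: a minimal kernel entry
    elapsed = time.perf_counter() - start
    return elapsed / n * 1e9  # mean nanoseconds per call

if __name__ == "__main__":
    print(f"mean getppid() cost: {trap_microbenchmark():.0f} ns")
```

Note that this loop exercises the trap path under idealized conditions: warm caches, no competing work, and the same call repeated back to back, which is precisely the setting the discussion below questions.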
Two implicit assumptions underlie the use of microbenchmarks. First, it is assumed that the time required for the microbenchmark to exercise the code path in question is the same as it is when the code path is used by real programs. Second, there is the assumption that a microbenchmark is actually representative of something which is either important in its own right, or which has a measurable impact on overall system performance. In this paper we point out the vulnerability of the first assumption by showing the significant variation that can occur with even a simple microbenchmark. Identifying weaknesses in the second assumption is something best done on a case-by-case basis.