We present a concentration bound that is more suitable for efficient numerical algorithms. Random sampling is often considered a last resort in numerical code packages due to the associated overhead. For many iterative routines that gradually improve the answer, the samples need to meet guarantees for all possible inputs, rather than only those encountered at runtime. This stronger requirement leads to the overheads common in standard concentration bounds.
We propose analyzing the sampling process in conjunction with the outer loops. This allows us to show that preconditioners given by standard sampling methods have nearly optimal behavior in expectation. Previous constructions of such optimal preconditioners rely on derandomization and require cubic time. Our results can also be viewed as variants of the block Kaczmarz method and stochastic gradient descent.
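To make the connection to Kaczmarz-type methods concrete, the following is a minimal sketch of the classical randomized Kaczmarz iteration for a consistent system $Ax = b$, the simplest instance of the sampling-based iterative schemes referenced above. This is standard background, not the paper's construction; all function and variable names are illustrative. Rows are sampled with probability proportional to their squared norms, and each step projects the iterate onto the hyperplane defined by the sampled row.

```python
import numpy as np

def randomized_kaczmarz(A, b, num_iters=1000, seed=0):
    """Solve a consistent system A x = b by randomized Kaczmarz.

    Each iteration samples row i with probability ||a_i||^2 / ||A||_F^2
    and projects the current iterate onto {y : <a_i, y> = b_i}.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)  # squared norm of each row
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(num_iters):
        i = rng.choice(m, p=probs)  # sample one row at random
        a_i = A[i]
        # orthogonal projection onto the hyperplane of equation i
        x += (b[i] - a_i @ x) / row_norms_sq[i] * a_i
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 10))
    x_true = rng.standard_normal(10)
    b = A @ x_true
    x = randomized_kaczmarz(A, b, num_iters=5000)
    print(np.linalg.norm(x - x_true))
```

Block Kaczmarz variants sample a block of rows per step and project onto the intersection of the corresponding hyperplanes; the method is also equivalent to stochastic gradient descent on the least-squares objective with appropriately weighted sampling.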