We presented an alternative covering number argument and showed that the true error rate bounds constructed with this argument are within a small additive term of the lower bound on some learning problems. This is a significant improvement over prior results, which only bound the ratio of the upper and lower bounds by a constant. We also presented a simple improvement to PAC-Bayes bounds for stochastic classifiers which achieves a similarly small gap between the lower and upper bounds.
It is interesting to examine the relationship between the bracketing covering number and the PAC-Bayes bound. With this notion of covering number, we can guarantee that all hypotheses covered by the same bracketing pair have similar empirical and true errors. Thus, we can relate the error rate of an individual hypothesis to a set of hypotheses of significant measure, which is exactly the setting where the PAC-Bayes bound is tight.
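To sketch why, consider the following minimal argument; the notation here ($\ell_h$ for the $0$-$1$ loss of hypothesis $h$, $e_D$ and $\hat e_S$ for true and empirical error, and $(\lambda, \upsilon)$ for a bracketing pair) is assumed for illustration rather than fixed in this section. Suppose $\lambda(x,y) \le \ell_h(x,y) \le \upsilon(x,y)$ pointwise for every hypothesis $h$ covered by the pair, with $\mathbb{E}_D[\upsilon - \lambda] \le \epsilon$. Then for any covered $h$,
\[
e_D(h) = \mathbb{E}_D[\ell_h] \in \bigl[\mathbb{E}_D[\lambda],\, \mathbb{E}_D[\upsilon]\bigr]
\qquad \text{and} \qquad
\hat e_S(h) = \hat{\mathbb{E}}_S[\ell_h] \in \bigl[\hat{\mathbb{E}}_S[\lambda],\, \hat{\mathbb{E}}_S[\upsilon]\bigr],
\]
so any two hypotheses covered by the same pair have true errors within $\epsilon$ of each other and empirical errors within $\hat{\mathbb{E}}_S[\upsilon - \lambda]$. A prior that spreads its mass over the brackets therefore assigns significant measure to a set of hypotheses whose errors track those of $h$, up to these $\epsilon$-scale slacks.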
Much work remains in the quest for quantitatively tight learning bounds.