The goal of this chapter is to make PAC-Bayes bounds more applicable. This is done by tightening the analysis from a Hoeffding-like to a Chernoff-like statement, and by noting that Monte Carlo evaluation can be used to quickly and safely bound the stochastic classifier's empirical error rate.
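The Monte Carlo step can be sketched as follows: draw n classifiers from the posterior, count their errors on the sample, and invert a Chernoff-style (small-kl) tail bound to get a high-probability upper bound on the stochastic error rate. This is a minimal illustrative sketch, not the chapter's exact procedure; the function names (`kl_bernoulli`, `kl_inverse_upper`, `mc_error_upper_bound`) and the bound `log(1/delta)/n` are assumptions introduced here for illustration.

```python
import math

def kl_bernoulli(q, p):
    """KL divergence between Bernoulli(q) and Bernoulli(p), clamped for stability."""
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse_upper(q_hat, bound, tol=1e-9):
    """Largest-root inversion: smallest p >= q_hat with kl(q_hat || p) <= bound,
    found by bisection on the monotone map p -> kl(q_hat || p)."""
    lo, hi = q_hat, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl_bernoulli(q_hat, mid) > bound:
            hi = mid
        else:
            lo = mid
    return hi

def mc_error_upper_bound(num_errors, n, delta):
    """Upper-bound the expected stochastic error rate from n Monte Carlo draws,
    holding with probability at least 1 - delta over the sampling.
    Uses the Chernoff-like kl inversion rather than a looser Hoeffding term."""
    q_hat = num_errors / n
    return kl_inverse_upper(q_hat, math.log(1 / delta) / n)
```

For example, observing 100 errors in 1000 Monte Carlo draws at delta = 0.05 yields an upper bound only slightly above the empirical rate of 0.1, and the gap shrinks as n grows; near q_hat = 0 the kl inversion is markedly tighter than the corresponding Hoeffding bound.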
In work detailed in chapter 13, results for the application of PAC-Bayes bounds to stochastic neural networks are presented. PAC-Bayes bounds are one of very few approaches capable of producing non-vacuous learning-theory bounds on continuous-valued classifiers for real-world problems.