Margin bounds in machine learning theory apply to arbitrary mixture-of-experts hypotheses of the form:

h(x) = sign(sum_i a_i * g_i(x))

where each base hypothesis g_i(x) predicts either +1 or -1, and the weights a_i are nonnegative and normalized to sum to 1.
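As a concrete illustration, here is a minimal sketch of such a voting hypothesis in Python, along with the margin of a labeled example (the normalized weighted vote times the label, which lies in [-1, 1]). The expert functions and weights below are hypothetical toy choices, not from any particular algorithm:

```python
def normalized(weights):
    """Scale nonnegative weights a_i so they sum to 1."""
    total = sum(weights)
    return [w / total for w in weights]

def vote(x, experts, weights):
    """Voting hypothesis h(x) = sign(sum_i a_i * g_i(x))."""
    a = normalized(weights)
    s = sum(ai * g(x) for ai, g in zip(a, experts))
    return 1 if s >= 0 else -1

def margin(x, y, experts, weights):
    """Margin of example (x, y): y * sum_i a_i * g_i(x), in [-1, 1]."""
    a = normalized(weights)
    return y * sum(ai * g(x) for ai, g in zip(a, experts))

# Three toy threshold experts on scalar inputs (for illustration only)
experts = [lambda x: 1 if x > 0 else -1,
           lambda x: 1 if x > 1 else -1,
           lambda x: 1 if x > -1 else -1]
weights = [0.5, 0.25, 0.25]

print(vote(0.5, experts, weights))       # -> 1 (weighted vote is positive)
print(margin(0.5, 1, experts, weights))  # -> 0.5*1 + 0.25*(-1) + 0.25*1 = 0.5
```

A large positive margin means the weighted vote is confidently correct; the margin bounds discussed below control generalization error in terms of the distribution of these margins on the training set.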

I will discuss the original margin bound by Schapire, Freund, Bartlett, and Lee:

Robert E. Schapire, Yoav Freund, Peter Bartlett and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5):1651-1686, 1998.
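Stated informally (up to constants, and assuming a base-hypothesis class of VC dimension d and a training sample S of size m), the bound says that with high probability, simultaneously for every margin threshold theta > 0 and every voting hypothesis f(x) = sum_i a_i g_i(x):

$$
\Pr_{D}\big[\, y f(x) \le 0 \,\big] \;\le\; \Pr_{S}\big[\, y f(x) \le \theta \,\big] \;+\; O\!\left( \sqrt{ \frac{d \,\log^2(m/d)}{m\,\theta^2} } \right)
$$

That is, the generalization error is controlled by the fraction of training examples with margin at most theta, plus a complexity term that shrinks as the margins grow, independent of the number of experts combined.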

Then I will present some new work that functionally tightens the bound and has implications for the design of mixture-of-experts learning algorithms.