15-859(B) Machine Learning Theory                              02/18/08

* Margin bounds and luckiness functions.
=====================================================================

Learning linear separators
==========================
Given data in R^n, labeled +,-.  Want to find a linear separator.  Can do
this with a number of algorithms, including linear programming.  Notice
that this includes decision lists, so pretty much everything we've seen
can be transformed into a linear separator problem, with a suitable
definition of features (e.g., for k-decision lists).

Some things we've seen:

- The VC-dim of linear separators in n-dim space is n+1.  So this says
  that O((1/epsilon)[n*log(1/epsilon) + log(1/delta)]) examples are
  sufficient for learning.

- But we've seen that if there's a large margin, we can get away with
  less.  Assume the target is w^* . x > 0 and has zero error, with
  |w^*| = 1 and all |x| <= 1.  Given a sample S, define the margin to be

      gamma = min_{x in S} |w^* . x|.

  For example, the Perceptron algorithm makes at most 1/gamma^2 mistakes.
  This means, using good "online->batch" conversions, we just need a
  training set of size

      O((1/epsilon)[M + log(1/delta)]),   M = mistake bound = 1/gamma^2.

- One view: the Perceptron alg is not so good, since gamma can be
  exponentially small in n (as in the case of decision lists).
  Alternative view: gamma can often be fairly large (the "large margin"
  case), in which case this is a better bound: it depends only on the
  margin, not on the dimension.

Let's do this online->batch conversion formally, since we didn't quite
do it earlier in class.

Theorem: If we have a conservative alg with mistake bound M, we can use
it to get PAC sample complexity O((1/epsilon)[M + log(1/delta)]).

Proof: Split the data into a "training set" S_1 of size
max[(4M/epsilon), (16/epsilon)*ln(1/delta)] and a "test set" S_2 of size
(32/epsilon)*ln(M/delta).  Run the algorithm on S_1 and test all the
hypotheses it produces on S_2.

Claim 1: w.h.p., at least one hyp produced on S_1 has error < epsilon/2.

Proof (tricky!):
- If all of them have error >= epsilon/2, then the expected number of
  mistakes on S_1 is >= 2M.
- By Chernoff, Pr[# mistakes <= M] <= e^{-E[# mistakes]/8} <= delta.
- View it as a game: after M mistakes, the alg is forced to reveal the
  target.  So if the alg keeps giving bad hyps, then w.h.p. it will be
  forced to do so.

Claim 2: w.h.p., the best one on S_2 has error < epsilon.

Proof: It suffices to show that the good hypothesis is likely to look
better than 3*epsilon/4 on S_2, and that all hypotheses with true error
> epsilon are likely to look worse than 3*epsilon/4.  Just apply
Chernoff again to the set of hypotheses produced, as in your homework.
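
To make the conversion concrete, here is a minimal sketch in Python/NumPy
(the function names are just for illustration, not from the lecture): run
the conservative Perceptron on S_1, collect every hypothesis it produces
along the way, and keep whichever one makes the fewest mistakes on the
held-out set S_2.

    import numpy as np

    def conservative_perceptron(S1):
        """Conservative Perceptron on S1 = [(x, y), ...], x a numpy array,
        y in {-1,+1}.  It updates only on mistakes; if |x| <= 1 and some
        unit-norm w^* has margin gamma on S1, it makes at most 1/gamma^2
        updates, hence produces at most 1/gamma^2 + 1 hypotheses."""
        w = np.zeros(len(S1[0][0]))
        hyps = [w.copy()]
        for x, y in S1:
            if y * np.dot(w, x) <= 0:       # mistake: conservative update
                w = w + y * x
                hyps.append(w.copy())
        return hyps

    def online_to_batch(S1, S2):
        """Return the hypothesis produced on S1 with the fewest mistakes on S2."""
        def err(w, S):
            return np.mean([y * np.dot(w, x) <= 0 for x, y in S])
        return min(conservative_perceptron(S1), key=lambda w: err(w, S2))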
More about margins
==================
We've seen several reasons why having a large margin is a good thing
(Perceptron, Johnson-Lindenstrauss, random separators + boosting).  It
turns out another thing we can say is that w.h.p., *any* separator with
a large margin over the data will have low error.  This then motivates
finding the maximum-margin separator, which is what is done in Support
Vector Machines.  (You can also approximate it using the Perceptron by
doing an update on examples that it gets only "barely correct".)

Sample complexity analysis
==========================
The sample complexity analysis is done in two steps.

First thing to show: what is the maximum number of points that can be
split in all possible ways by a separator of margin at least gamma?
(a.k.a. the "fat-shattering dimension")  Ans: O(1/gamma^2).  Can anyone
see a simple proof?

Proof: A simple proof is just to consider the Perceptron algorithm.
Suppose the gamma-fat-shattering dimension is d.  Then an adversary can
force the Perceptron alg to make d mistakes (present the d fat-shattered
points and label each one opposite to the Perceptron's prediction) and
yet still have a separator w^* of margin gamma.  But we know the number
of mistakes is at most 1/gamma^2.  So, that's it.

Second part: now we want to apply this to get a sample-complexity
bound.  It seems like the analysis we used for VC-dimension should just
go right through, but it's actually not so easy, plus there's one
technical fact we'll need.  Let's do the analysis and just give a
citation for the technical fact.

Analysis: Draw 2m points from D.  We want to show it is unlikely there
exists a separator that gets the first half correct by margin gamma but
has more than epsilon*m mistakes on the 2nd half.  This then implies
the conclusion we want, by the same reasoning as when we argued the VC
bounds.  As in the VC proof, we'll show the stronger statement: for
*any* set S of size 2m, w.h.p. this is true over the randomization of
the split into two pieces S1, S2 of size m.

In the VC argument, we said: fix some h that makes at least epsilon*m
mistakes.  We showed that Prob(all mistakes are on the 2nd half) is at
most 2^{-epsilon*m}, and then applied a union bound over all labelings
of the data using h in C.  For us, it's tempting to say "let's count
the number of separators with margin gamma over all of S", but this
might be undercounting: what about separators that only have margin
gamma on S1?

Instead, we'll do the following more complicated thing.  First, let's
assume all |x| = 1.  Now let's group the separators together.  Define
h(x) = h.x, but truncated at +/- gamma, and define

    dist_S(h1,h2) = max_{x in S} |h1(x) - h2(x)|.

We want a "gamma/2-cover": a set H of separators such that every other
separator is within gamma/2 of some separator in H.  The claim is that
there exists an H that is not too large, as a function of the
fat-shattering dimension [Alon et al].  Roughly you get
|H| ~ (m/gamma^2)^(log(m)/gamma^2).

Now, for these guys, define "correct" as "correct by margin at least
gamma/2" and define "mistake" as "mistake OR correct by margin less
than gamma/2".  Our standard VC argument shows that so long as m is
large compared to (1/epsilon)*log(|H|/delta), w.h.p. none of these will
get S1 all correct and yet make > epsilon*m "mistakes" on S2.  This
then implies (by the definition of H) that w.h.p. *no* separator gets
S1 correct by margin >= gamma and has > epsilon*m real mistakes on S2.
log|H| is approximately log^2(m)/gamma^2, so in the end you get a bound
of

    m = O((1/epsilon)[(1/gamma^2)*log^2(1/(gamma*epsilon)) + log(1/delta)]).
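
To make the objects in this argument concrete, here is a small
Python/NumPy sketch (the names are just for illustration) of the
truncated margin map h(x), the distance dist_S used to build the
gamma/2-cover, and the relaxed notion of "mistake" (misclassified, or
correct by margin less than gamma/2).  As in the analysis, it assumes
|w| = 1 and |x| = 1.

    import numpy as np

    def truncated_margins(w, X, gamma):
        """h(x) = w.x truncated at +/- gamma, for each row x of X."""
        return np.clip(X @ w, -gamma, gamma)

    def dist_S(w1, w2, X, gamma):
        """dist_S(h1,h2) = max_{x in S} |h1(x) - h2(x)|, h = truncated margin."""
        return np.max(np.abs(truncated_margins(w1, X, gamma)
                             - truncated_margins(w2, X, gamma)))

    def relaxed_mistakes(w, X, y, gamma):
        """Count 'mistakes' in the relaxed sense: misclassified, OR correct
        by margin less than gamma/2."""
        return int(np.sum(y * (X @ w) < gamma / 2))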
Luckiness functions
===================
The basic idea of margins was in essence to view some separators as
"simpler" than others, using margin as the notion of "simple".  What
makes this different from our Occam bounds is that the notion of
"simple" depends on the data.  Basically, we have a data-dependent
ordering of functions such that if we're lucky and the target has low
complexity in this ordering, then we don't need much training data.
More generally, things like this are called "luckiness functions".  If
a function is a "legal notion of luckiness" (basically, the ordering
depends only on the data points and not their labels, and there are not
too many splits of the data with small complexity), then you can apply
sample complexity bounds.

Support-Vector Machines
=======================
Support vector machines do convex optimization to find the
maximum-margin separator, and more generally to optimize a given
tradeoff between margin and hinge-loss.

Let's first do the easier case, where we assume the data is linearly
separable and we want the separator of maximum margin.  Then we could
write that as:

    minimize   |w|^2
    subject to l(x)*(w.x) >= 1  for all examples x in our training set

(l(x) is the label of x).  This is a convex optimization problem, so we
can do it.  Equivalently, we could fix |w|^2 <= 1 and maximize a margin
gamma on the right-hand side, but people like to do it this way, for
reasons that will make more sense in a minute.  Note that with the RHS
set to 1, at the optimum we have 1/|w| = gamma and so |w|^2 = 1/gamma^2.

More generally, we want a tradeoff between margin and hinge-loss, so
what SVMs really do is the following (C is a given constant):

    minimize:   |w|^2 + C * sum_{x in S} epsilon_x
    subject to: l(x)(w.x) >= 1 - epsilon_x   for all x in S
                epsilon_x >= 0               for all x in S

Here is the motivation.  What we *really* want is to minimize the true
error err(h), but what we observe is our empirical error err_S(h).  So,
let's split err(h) into two parts: (1) err(h) - err_S(h), which is the
amount we're overfitting, and (2) err_S(h).  One bound on part (1) is
approximately (1/gamma^2)/|S|, if err_S(h) is small (since our sample
complexity bound was approximately |S| = (1/gamma^2)(1/epsilon)).  So
this is 1/|S| times the first part of the objective function.  The
second part of the objective (for C=1) is our total hinge-loss, summed
over all the examples; dividing by |S| gives our average hinge-loss,
which is an upper bound on err_S(h).  So an upper bound on (2) is 1/|S|
times the second part of the objective function.  Thus the two parts of
the objective function are upper bounds on the two quantities we care
about: overfitting and empirical error.
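
As an illustration of this convex program, here is a minimal sketch
that hands it to an off-the-shelf solver.  It assumes the cvxpy library
is available; the function and variable names are just for
illustration, and (as in the lecture's formulation, where separators
pass through the origin) there is no bias term.

    import cvxpy as cp

    def soft_margin_svm(X, y, C=1.0):
        """Solve  min |w|^2 + C * sum_x epsilon_x
           s.t.   l(x)*(w.x) >= 1 - epsilon_x,  epsilon_x >= 0
        for numpy arrays X (m x n, one example per row) and y in {-1,+1}^m."""
        m, n = X.shape
        w = cp.Variable(n)
        eps = cp.Variable(m, nonneg=True)        # slack variables epsilon_x
        objective = cp.Minimize(cp.sum_squares(w) + C * cp.sum(eps))
        constraints = [cp.multiply(y, X @ w) >= 1 - eps]
        cp.Problem(objective, constraints).solve()
        return w.value

For separable data, taking C very large forces all the epsilon_x to 0,
so this recovers (approximately) the maximum-margin separator from the
first formulation.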