15-859(B) Machine Learning Theory 02/14/02
* Weak vs Strong learning, and Boosting
[no hwk today. Instead, start thinking about projects/presentations]
========================================================================
Today: take a look at some basic questions about the PAC model. This
will lead us into our next topic: boosting.
We defined the PAC model like this:
Def1: Alg A PAC-learns concept class C if for any c in C, any
distribution D, any epsilon, delta > 0, with probability 1-delta, A
produces a poly-time evaluatable hypothesis h with error at most
epsilon.
Goal is to do this with # examples and running time polynomial in
relevant parameters.
ISSUE #1: DO WE NEED TO SAY "FOR ALL DELTA"?
===========================================
What if we change the definition to say delta = 1/2? Or, even delta =
1 - 1/poly(n)? I.e., there is only *some* noticeable chance it succeeds.
Def2: same as Def1 but replace "any delta > 0" with "some delta <
1-1/poly(n)".
If C is learnable using Def2, is it also learnable using Def1?
Yes: Say we're given alg A that works using Def2 with chance of
success 1/n^c. We'll run it n^c * ln(3/delta) times, with epsilon/4
as error parameter. Pr(all failed) <= e^{-ln(3/delta)} = delta/3.
Now, draw a new sample of m points and test the hypotheses on it,
picking one whose observed error is < epsilon/2. Use Chernoff bounds:
  Pr(the good hyp looks bad)     < e^{-m*epsilon/12}
  Pr(a given bad hyp looks good) < e^{-m*epsilon/8}
Want first quantity to be less than delta/3. Want second quantity to
be less than delta/(3*(# of hypotheses)). Say delta^2/poly(n).
Solve: m = O((1/epsilon)*log(n/delta)).
So, if A ran in polynomial running time, then new algorithm does too.
So, delta not so important. Can fix to 1/2 and still get same notion
of "learnability".
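This confidence-boosting recipe is mechanical enough to sketch in code.
Below is a minimal Python sketch (mine, not from the lecture), assuming
hypothetical interfaces: A(epsilon) returns a hypothesis (a function
x -> label) and draw_example() draws a labeled pair (x, y) from D.  The
toy demo at the bottom stands in for an algorithm that only succeeds
occasionally.

```python
import math
import random

def boost_confidence(A, draw_example, epsilon, delta, n_c):
    """Turn a Def2-style learner (success probability >= 1/n^c) into a
    Def1-style one: run A many times with error parameter epsilon/4,
    then keep the candidate with lowest error on a fresh test sample.
    A(epsilon) -> hypothesis and draw_example() -> (x, y) are
    hypothetical interfaces assumed for this sketch."""
    k = math.ceil(n_c * math.log(3 / delta))   # Pr(all runs fail) <= delta/3
    hyps = [A(epsilon / 4) for _ in range(k)]
    # Test-set size m = O((1/epsilon) * log(k/delta)), per the Chernoff bounds.
    m = math.ceil((12 / epsilon) * math.log(3 * k / delta))
    test = [draw_example() for _ in range(m)]
    def observed_error(h):
        return sum(h(x) != y for x, y in test) / len(test)
    return min(hyps, key=observed_error)

# Toy demo: a learner that returns a perfect hypothesis only on every 7th
# call (standing in for "succeeds with probability 1/n^c"); target is x mod 2.
_calls = [0]
def toy_A(eps):
    _calls[0] += 1
    return (lambda x: x % 2) if _calls[0] % 7 == 0 else (lambda x: 1 - x % 2)

rng = random.Random(0)
def toy_draw():
    x = rng.randrange(1000)
    return x, x % 2

h = boost_confidence(toy_A, toy_draw, epsilon=0.1, delta=0.05, n_c=10)
```

Since the good candidate has zero observed error on the fresh sample and
the bad ones look terrible, the selection step picks it out.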
[Another way to do this argument: run it once, then test on test set.
If good halt. If not, repeat. Then argue that whp don't have to
repeat too many times.]
ISSUE #2: DO WE NEED TO SAY "FOR ALL EPSILON"?
=============================================
Def 3: Let's say that alg A WEAK-LEARNS class C if for all c in C, all
distributions D, THERE EXISTS epsilon, delta > 1/poly(n) s.t. A
achieves error 1/2 - epsilon with prob at least delta.
I.e., with some probability you get noticeable correlation.
Question: suppose we defined the PAC model this way. Does this change
the notion of what is learnable and what is not?
Answer: it doesn't. Given an algorithm that satisfies Def3, we can
"boost" it to an algorithm that satisfies Def1.
First of all, we can handle the delta by testing on a test set, and if
it failed, we repeat. So, in the following, we'll ignore delta and
assume that each time, we get a hypothesis whose error is at most 1/2
- epsilon.
Boosting: preliminaries
-----------------------
We're going to have to use the fact that algorithm A works (in this
weak way) over *every* distribution. If you look at the
"distribution specific" version of this question, the answer is NO.
E.g., say your target function was "x is positive if x1 = 1 or if x
is a quadratic residue mod N (N is big)" (or insert your favorite hard
cryptographic property). Then, it's easy to get error 25% over uniform
random examples: predict positive if x1 = 1 and whatever you want
otherwise. But you can't do any better than that.
The problem is that there is this hard core to the target function.
Boosting shows this situation is in a sense universal. If you have
some arbitrary learning algorithm that supposedly works well, we'll
view it as a black box and either boost up its accuracy as much as we
like, or else we'll find a distribution D where that algorithm gets
error > 1/2 - epsilon.
An easy case
------------
Suppose our weak-learning algorithm only makes one-sided error. For
instance, it gets all of the +'s right and at least 1/n^c of the -'s
right. (E.g., when learning a conjunction, greedy alg picks a variable
with this property. Or think of learning a convex region). Another
way to think of this is that when h(x)=0 it is correct, and h(x)=0 on
at least a 1/n^c fraction of the -'s. Then, boosting is easy:
We first find h_1 that gets at least 1/n^c of the -'s right
over distribution D. We then define D_2 to be D restricted to examples
x such that h_1(x) = 1 (i.e., throw out the -'s we got right) and
learn over that, etc.
Our final hypothesis is the AND of all the h_i(x)'s.
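As a concrete illustration (mine, not the lecture's), here is a minimal
Python sketch of this one-sided scheme, with learning a conjunction via
a greedy literal-picker as the weak learner; weak_learn(S) -> h is an
assumed interface.

```python
from itertools import product

def boost_one_sided(weak_learn, sample, rounds):
    """One-sided boosting as above: each h_i is correct whenever it says 0,
    so we learn, throw out the negatives just ruled out, repeat, and
    return the AND of the h_i."""
    hyps, S = [], list(sample)                    # S is a list of (x, y) pairs
    for _ in range(rounds):
        h = weak_learn(S)
        hyps.append(h)
        S = [(x, y) for x, y in S if h(x) == 1]   # h(x)=0 answers are final
        if not any(y == 0 for _, y in S):         # all negatives covered
            break
    return lambda x: int(all(h(x) for h in hyps))

# Toy weak learner for conjunctions: greedily pick a variable that is 1 on
# every remaining positive and 0 on some remaining negative.
def greedy_literal(S):
    pos = [x for x, y in S if y == 1]
    neg = [x for x, y in S if y == 0]
    for i in range(len(S[0][0])):
        if all(x[i] == 1 for x in pos) and any(x[i] == 0 for x in neg):
            return lambda x, i=i: x[i]
    raise ValueError("no consistent literal found")

target = lambda x: x[0] & x[1]
S0 = [(x, target(x)) for x in product((0, 1), repeat=4)]
H = boost_one_sided(greedy_literal, S0, rounds=4)
```

Each greedy literal never errs on a positive, so ANDing them can only
fix negatives, just as in the argument above.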
For boosting hypotheses that make 2-sided error, though, it's a little
more tricky. Let's first talk about Schapire's original method (the
one described in the textbook) and then after that we'll talk about
Adaboost (by Freund and Schapire) that does this more efficiently and
gives a more practical algorithm in the end.
Preliminaries:
--------------
It will make things simpler to add an assumption that A is choosing
hypotheses from some class of limited VC-dimension d, so that for an
appropriate-size sample we can assume empirical error is close to true
error for A's hypotheses. That way we can just talk about drawing one
large sample, and doing various things with the sample.
For instance, we can think of the above algorithm as: first pick a
large sample S. Then find h_1 (that says negative on at least 1/n^c
of the -'s). Then throw out examples from S where h_1(x)=0.
Then find h_2, etc.. This will continue for n^c*log(m) steps until
we've thrown out all the -'s. Just like greedy set cover.
Note: how much can taking the conjunction of k hypotheses increase the
growth function C[m] by? At worst, it raises it to the kth power. So,
taking logs, we need at most k times more examples.
Boosting (Rob Schapire's original approach)
-------------------------------------------
To reduce notation, let's fix some weak error rate. Assume that we
have an algorithm A that over any distribution will get at least 70%
accuracy with high probability. Say we want to boost it up a bit
(which we can then apply recursively).
We start by running A on S, and getting hypothesis h_1. Say its
accuracy is 70%. Now we want a new distribution.
One try: only look at examples in S where h_1 predicts incorrectly and
try to learn over that. This DOESN'T work. Why? Because on that
distribution the hypothesis "predict the opposite of h_1" is perfect,
so the weak learner can satisfy its guarantee without telling us
anything new about the target.
Instead: we want to create a distribution where h_1 behaves like random
guessing. Specifically, let S_C be the set of examples in S on
which h_1 predicts correctly and S_I be the set on which h_1
predicts incorrectly. What we want is a distribution D_2 where
S_C and S_I both have equal weight. We can do that by creating a
non-uniform distribution over S, where each example in S_C gets
probability 1/(2|S_C|) and each example in S_I gets probability
1/(2|S_I|).
Another way to think of this. Let D_1[x] be the original probability
on x (i.e., 1/|S|). Then to create D_2 we reduced the probability on
S_C from 70% to 50%, so if x in S_C then D_2[x] = (5/7)D_1[x]. We
increased the probability on S_I from 30% to 50% so if x in S_I then
D_2[x] = (5/3)D_1[x].
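A quick numeric check of this reweighting (a sketch over a made-up
sample): build D_2 from a uniform sample and h_1, then confirm that
correct and incorrect examples each get total weight 1/2, with the 5/7
and 5/3 scalings above.

```python
def make_D2(S, h1):
    """Reweight the uniform distribution over S so that h_1 is a fair coin:
    each example h_1 gets right has weight 1/(2|S_C|), each it gets
    wrong has weight 1/(2|S_I|)."""
    S_C = [ex for ex in S if h1(ex[0]) == ex[1]]
    S_I = [ex for ex in S if h1(ex[0]) != ex[1]]
    D2 = {}
    for ex in S_C:
        D2[ex] = 1 / (2 * len(S_C))
    for ex in S_I:
        D2[ex] = 1 / (2 * len(S_I))
    return D2

# Made-up sample of 10 points where h_1 is 70% accurate: all labels are 1,
# and h_1 answers 1 only for x < 7.
S = [(x, 1) for x in range(10)]
h1 = lambda x: 1 if x < 7 else 0
D2 = make_D2(S, h1)
```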
Now we get a hypothesis h_2 with accuracy 70% on D_2.
Finally, let's feed into A the examples in S where h_1(x) != h_2(x).
We then get hypothesis h_3 with accuracy 70% over this set.
Then we put the three together by taking majority vote. Or, can think
of it this way: if h_1 and h_2 agree, then go with that, else go with
h_3.
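The combining rule can be written either way; a tiny sketch confirming
the two descriptions coincide:

```python
from itertools import product

def combine(h1, h2, h3):
    """If h_1 and h_2 agree, go with that; otherwise defer to h_3.
    This is exactly the majority vote of the three."""
    def h(x):
        v1, v2 = h1(x), h2(x)
        return v1 if v1 == v2 else h3(x)
    return h

# Check equivalence with majority vote over all constant hypotheses.
agrees_with_majority = all(
    combine(lambda x: a, lambda x: b, lambda x: c)(None) == (a + b + c >= 2)
    for a, b, c in product((0, 1), repeat=3)
)
```

The agree-first view is the one the analysis below uses: h_3 only
matters on the region where h_1 and h_2 disagree.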
Analysis of original scheme
---------------------------
What's the accuracy of the new hypothesis?
Easiest way is to draw a picture. Divide S into 4 regions: R_1 in
which h_1 = c and h_2 = c, R_2 in which h_1 = c and h_2 != c, R_3 in
which h_1 != c and h_2 != c, and R_4 in which h_1 != c and h_2 = c.
We predict correctly on R_1, incorrectly on R_3 and get 70% correct on
R_2 U R_4. Let's say that D_2[R_2] = gamma. By the fact that h_2 has
30% error on D_2 we know that D_2[R_3] = 0.3 - gamma. By definition of
D_2 we know D_2[R_1] = 1/2 - gamma, and D_2[R_4] = 0.2 + gamma.
Working back to our original uniform distribution over S, we get:
D_1[R_1] = 7/5 * (1/2 - gamma)
and
D_1[R_3] = 3/5 * (3/10 - gamma).
We can now work out our error rate over S as:
Pr[fail] = D_1[R_3] + (3/10)(1 - D_1[R_1] - D_1[R_3])
= (7/10)D_1[R_3] + (3/10)(1 - D_1[R_1])
= (7/10)[3/5 * (3/10 - gamma)] + (3/10)[3/10 + (7/5)gamma]
= (3/10)^2[(7/5) + 1]
which equals 0.216 (note that gamma has canceled out).
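As a sanity check (mine, not the notes'), the gamma terms really do
cancel: plugging the region weights derived above into the failure
probability gives 0.216 for every feasible gamma.

```python
def pr_fail(gamma):
    """Failure probability of the majority vote over S, in terms of
    gamma = D_2[R_2], using the region weights derived above."""
    d1_r1 = (7 / 5) * (1 / 2 - gamma)     # D_1[R_1]: both h_1, h_2 correct
    d1_r3 = (3 / 5) * (3 / 10 - gamma)    # D_1[R_3]: both incorrect
    # Wrong for sure on R_3, and with probability 0.3 (h_3's error rate)
    # on the disagreement region R_2 u R_4.
    return d1_r3 + 0.3 * (1 - d1_r1 - d1_r3)
```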
More generally, if we work this through, we get:
error <- error^2(3 - 2*error)
This is always a decrease assuming that our original error was
strictly between 0 and 1/2.
Or, can look at it this way. Let bias = accuracy - error = 1 - 2*error.
Then:
  new bias = (old bias) + 2*(old bias)*(old accuracy)*(old error)
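A quick numeric sketch tying the two recurrences together: iterating
error <- error^2(3 - 2*error) from 30% drives the error down rapidly,
and the bias form of the update agrees with it.

```python
def boost_error(e):
    """One level of the three-hypothesis scheme: e -> e^2 * (3 - 2e)."""
    return e * e * (3 - 2 * e)

def boost_bias(e):
    """Same update in bias form: b -> b + 2*b*(accuracy)*(error),
    where b = 1 - 2e."""
    b = 1 - 2 * e
    return b + 2 * b * (1 - e) * e

# Applying the scheme recursively starting from a 30%-error weak learner.
errs = [0.3]
for _ in range(4):
    errs.append(boost_error(errs[-1]))
```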