**Clarification to exercise 1:** Winnow, as described in class, is a
*conservative* algorithm. In other words, it only changes its
hypothesis when it makes a mistake. One could imagine changing the algorithm
to be non-conservative. For instance, it could perform its "halving step"
(cutting in half those weights w_i for which x_i = 1) on every negative
example, not just those on which it makes a mistake. Or, it could perform
its "doubling step" (doubling those weights w_i for which x_i = 1) on every
positive example, not just those on which it makes a mistake. Your
job is to show that this latter idea is "unsafe": it could cause the
algorithm to make an unbounded number of mistakes, even when the data
are consistent with some disjunction.
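For experimenting with the two update rules, here is a minimal Python sketch of Winnow for monotone disjunctions over {0,1}^n, using threshold n and update factor 2 as in class. The function name and the `double_on_every_positive` flag are our own labels (not from the exercise); the flag switches on the non-conservative doubling variant described above so you can compare its behavior with the conservative algorithm.

```python
def winnow(examples, n, double_on_every_positive=False):
    """Run one pass of Winnow over `examples`, a list of (x, y) pairs
    where x is a 0/1 tuple of length n and y is the label in {0, 1}.
    Returns (final weights, number of mistakes made on the pass).

    With double_on_every_positive=False this is the conservative
    algorithm from class; with True, the doubling step fires on every
    positive example, not just on mistakes (the variant in question).
    """
    w = [1.0] * n      # all weights start at 1
    theta = n          # predict 1 iff the total active weight >= n
    mistakes = 0
    for x, y in examples:
        score = sum(w[i] for i in range(n) if x[i] == 1)
        pred = 1 if score >= theta else 0
        if pred != y:
            mistakes += 1
        if y == 0 and pred == 1:
            # halving step: false positive, so demote active weights
            for i in range(n):
                if x[i] == 1:
                    w[i] /= 2
        elif y == 1 and (pred == 0 or double_on_every_positive):
            # doubling step: conservatively only on a false negative,
            # or on *every* positive example in the aggressive variant
            for i in range(n):
                if x[i] == 1:
                    w[i] *= 2
    return w, mistakes
```

Feeding the conservative version several passes over examples labeled by a small disjunction (say, x_1 OR x_2 over n = 4 variables) shows it quickly stops making mistakes, consistent with its mistake bound; the exercise asks you to show that no such guarantee survives the aggressive doubling variant.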