**Prerequisites:** Either 15-781/10-701/15-681
Machine Learning, or 15-750 Algorithms, or a
background in theory/algorithms or in machine learning.

**Text:** *An Introduction to Computational Learning Theory* by Michael Kearns
and Umesh Vazirani, plus papers and notes for topics not in the book.

**Office hours:** Email / stop by anytime!

- [Online learning survey] [Winnow paper] [Winnow vs Perceptron] [Handout on tail inequalities]
- Homework 1. Solutions.
- Homework 2. Solutions.
- Homework 3. Solutions.
- Homework 4. Solutions.
- Homework 4.5 (plan for your course project). Due March 22.
- Homework 5. Solutions.
- Homework 6. Solutions.

- 01/11: Introduction. PAC model and Occam's razor.
- 01/13: The Mistake-Bound model. Combining expert advice. Connections to info theory.
- 01/20: The Winnow algorithm.
- 01/25: The Perceptron Algorithm, margins, & intro to kernels.
- 01/27: Uniform convergence, tail inequalities (Chernoff/Hoeffding), VC-dimension I. [more notes]
- 02/01: VC-dimension II (proofs of main theorems).
- 02/03: Boosting I: Weak to strong learning, Schapire's original method. [notes from Shuchi Chawla's course]
- 02/15: Boosting II: Adaboost + connection to WM analysis + L_1 margin bounds.
- 02/17: Rademacher bounds and McDiarmid's inequality.
- 02/22: Support Vector Machines, properties of kernels, MB=>PAC, L_2 margin bounds.
- 02/24: Margins, kernels, and general similarity functions (L_1 and L_2 connection).
- 03/01: Maxent and maximum likelihood exponential models; connection to winnow.
- 03/03: Learning with noise and the Statistical Query model I.
- 03/15: Statistical Query model II: characterizing weak SQ-learnability.
- 03/17: Fourier-based learning and learning with Membership queries: the KM algorithm.
- 03/22: Fourier spectrum of decision trees and DNF. Also hardness of learning parities with kernels.
- 03/24: Learning Finite State Environments. [more notes]
- 03/29: Learning Finite State Environments, contd.
- 03/31: MDPs and reinforcement learning.
- 04/05: Offline->online optimization.
- 04/07: Bandit algorithms and sleeping experts.
- 04/12: Game theory I (zero-sum and general-sum games). [See also this survey article]
- 04/14: Game theory II (achieving low internal/swap regret; congestion/exact-potential games).
- 04/19: Semi-supervised learning.
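
As a concrete taste of the mistake-bound algorithms above (01/25), here is a minimal sketch of the Perceptron algorithm; this is illustrative code, not course material, and the data set is made up. The classic guarantee is at most (R/&gamma;)&sup2; mistakes on data separable with margin &gamma; inside radius R.

```python
# Illustrative sketch of the Perceptron algorithm (01/25 lecture topic).
# On each mistake, add the misclassified example (scaled by its label)
# to the weight vector; repeat passes until a mistake-free pass.

def perceptron(examples, max_epochs=50):
    """examples: list of (x, y) with x a tuple of floats, y in {-1, +1}."""
    d = len(examples[0][0])
    w = [0.0] * d
    mistakes = 0
    for _ in range(max_epochs):
        clean_pass = True
        for x, y in examples:
            # Mistake: predicted sign disagrees with label (treat 0 as wrong).
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
                clean_pass = False
        if clean_pass:
            break
    return w, mistakes

# Hypothetical linearly separable data (last coordinate is a bias feature):
data = [((0.0, 0.0, 1.0), -1), ((1.0, 1.0, 1.0), 1),
        ((1.0, 0.0, 1.0), 1), ((0.0, 1.0, 1.0), 1)]
w, m = perceptron(data)
```

Since this toy data is separable with a fixed margin, the mistake bound guarantees convergence well within the epoch budget.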

- Robert Williamson, John Shawe-Taylor, Bernhard Schölkopf, and Alex Smola, *Sample Based Generalization Bounds*. Gives tighter generalization bounds: instead of "the maximum number of ways of labeling a set of 2m points," you can use "the number of ways of labeling your actual sample."
- See also a previous version of the course.
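
To make the "labelings of your actual sample" quantity concrete, here is a small illustration of my own (not from the paper): for 1-D threshold classifiers h_t(x) = 1[x &ge; t], any m distinct points admit exactly m + 1 labelings, and on a given sample you can simply enumerate them.

```python
# Illustrative sketch: enumerate the distinct {0,1}-labelings that 1-D
# threshold classifiers h_t(x) = 1[x >= t] induce on an actual sample.
# This count (m + 1 for thresholds) is the sample-dependent quantity that
# replaces the worst-case growth function in sample-based bounds.

def threshold_labelings(points):
    """Return the set of distinct labelings induced by thresholds."""
    # One representative threshold per "gap": at each point, plus one
    # above the maximum (which labels everything 0).
    cuts = sorted(points) + [max(points) + 1.0]
    return {tuple(1 if x >= t else 0 for x in points) for t in cuts}

sample = [0.3, 1.7, 2.2, 4.0, 5.5]
labs = threshold_labelings(sample)
print(len(labs))  # m + 1 = 6 distinct labelings on this 5-point sample
```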