CMU 15-859(B), Spring 2010

MACHINE LEARNING THEORY

Avrim Blum

MW 3:00-4:20, GHC 4102


Course description: This course will focus on theoretical aspects of machine learning. We will examine questions such as: What kinds of guarantees can we prove about learning algorithms? Can we design algorithms for interesting learning tasks with strong guarantees on accuracy and the amount of data needed? What can we say about the inherent ease or difficulty of learning problems? Can we devise models that both lend themselves to theoretical analysis and make sense empirically? Addressing these questions will bring in connections to probability and statistics, online algorithms, game theory, complexity theory, information theory, cryptography, and empirical machine learning research. Grading will be based on 6 homework assignments, class participation, a small class project, and a take-home final (worth about 2 homeworks). From time to time, students will also be asked to help with the grading of assignments.

Prerequisites: Either 15-781/10-701/15-681 Machine Learning, 15-750 Algorithms, or a comparable background in theory/algorithms or in machine learning.

Text: An Introduction to Computational Learning Theory by Michael Kearns and Umesh Vazirani, plus papers and notes for topics not in the book.

Office hours: Email / stop by anytime!


Handouts


Lecture Notes & tentative plan

  1. 01/11: Introduction. PAC model and Occam's razor.
  2. 01/13: The Mistake-Bound model. Combining expert advice. Connections to info theory. [sketch below]
  3. 01/20: The Winnow algorithm. [sketch below]
  4. 01/25: The Perceptron Algorithm, margins, & intro to kernels. [sketch below]
  5. 01/27: Uniform convergence, tail inequalities (Chernoff/Hoeffding), VC-dimension I. [more notes]
  6. 02/01: VC-dimension II (proofs of main theorems).
  7. 02/03: Boosting I: Weak to strong learning, Schapire's original method. [notes from Shuchi Chawla's course]
  8. 02/15: Boosting II: AdaBoost + connection to the Weighted Majority analysis + L_1 margin bounds.
  9. 02/17: Rademacher bounds and McDiarmid's inequality.
  10. 02/22: Support Vector Machines, properties of kernels, the mistake-bound-to-PAC conversion (MB => PAC), L_2 margin bounds.
  11. 02/24: Margins, kernels, and general similarity functions (L_1 and L_2 connection).
  12. 03/01: Maxent and maximum-likelihood exponential models; connection to Winnow.
  13. 03/03: Learning with noise and the Statistical Query model I.
  14. 03/15: Statistical Query model II: characterizing weak SQ-learnability.
  15. 03/17: Fourier-based learning and learning with membership queries: the Kushilevitz-Mansour (KM) algorithm.
  16. 03/22: Fourier spectrum of decision trees and DNF. Also hardness of learning parities with kernels.
  17. 03/24: Learning Finite State Environments. (more notes)
  18. 03/29: Learning Finite State Environments, continued.
  19. 03/31: MDPs and reinforcement learning.
  20. 04/05: Offline->online optimization.
  21. 04/07: Bandit algorithms and sleeping experts.
  22. 04/12: Game theory I (zero-sum and general-sum games). [See also this survey article]
  23. 04/14: Game theory II (achieving low internal/swap regret; congestion/exact-potential games).
  24. 04/19: Semi-supervised learning.
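
As a taste of the early material, here is a minimal Python sketch of the deterministic Weighted Majority algorithm for combining expert advice (lecture 2). It is an illustration rather than the lecture's exact presentation; the expert predictions and outcomes below are hypothetical {0,1} sequences.

    # Minimal sketch of deterministic Weighted Majority: predict with a
    # weighted vote of the experts, then halve the weight of every
    # expert that was wrong on this round.
    def weighted_majority(expert_predictions, outcomes):
        n = len(expert_predictions[0])        # number of experts
        w = [1.0] * n
        mistakes = 0
        for preds, y in zip(expert_predictions, outcomes):  # all in {0,1}
            vote_for_1 = sum(wi for wi, p in zip(w, preds) if p == 1)
            vote_for_0 = sum(wi for wi, p in zip(w, preds) if p == 0)
            if (1 if vote_for_1 >= vote_for_0 else 0) != y:
                mistakes += 1
            w = [wi * 0.5 if p != y else wi for wi, p in zip(w, preds)]
        return w, mistakes

    # Hypothetical run: 3 experts over 4 rounds; expert 0 is perfect.
    preds = [(1, 0, 1), (0, 0, 1), (1, 1, 0), (0, 1, 1)]
    outcomes = [1, 0, 1, 0]
    w, m = weighted_majority(preds, outcomes)

The analysis covered in class bounds the algorithm's mistakes by roughly 2.4(m + log_2 n), where m is the number of mistakes of the best of the n experts.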
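
In the same spirit, here is a minimal sketch of Winnow (lecture 3) for learning a monotone disjunction over n Boolean attributes, using threshold n and factor-of-2 multiplicative updates; the target and data below are hypothetical.

    # Minimal sketch of Winnow: predict positive iff the weighted sum of
    # the active attributes reaches the threshold n; on a mistake,
    # multiplicatively promote (false negative) or demote (false
    # positive) the weights of the active attributes.
    def winnow(examples, n):
        w = [1.0] * n
        mistakes = 0
        for x, y in examples:                 # x in {0,1}^n, y in {0,1}
            total = sum(wi for wi, xi in zip(w, x) if xi)
            if (1 if total >= n else 0) != y:
                mistakes += 1
                factor = 2.0 if y == 1 else 0.5
                w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
        return w, mistakes

    # Hypothetical toy data for the target x1 OR x3 (n = 4).
    examples = [((1, 0, 0, 0), 1), ((0, 1, 0, 1), 0),
                ((0, 0, 1, 1), 1), ((0, 1, 0, 0), 0)]
    w, m = winnow(examples, 4)

For a target disjunction of r of the n variables, Winnow's mistake bound is O(r log n), so it performs well even when only a few of many attributes are relevant.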
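
And here is a minimal sketch of the online Perceptron algorithm (lecture 4), again on hypothetical toy data.

    # Minimal sketch of the online Perceptron: predict sign(w . x) and,
    # on a mistake, add y*x to the weight vector.
    import numpy as np

    def perceptron(examples):
        w = np.zeros(len(examples[0][0]))
        mistakes = 0
        for x, y in examples:                 # y in {-1, +1}
            if y * np.dot(w, x) <= 0:         # mistake (or zero margin)
                w = w + y * x
                mistakes += 1
        return w, mistakes

    # Hypothetical data, linearly separable by the first coordinate.
    examples = [(np.array([1.0, 0.5]), 1), (np.array([-1.0, 0.3]), -1),
                (np.array([0.8, -0.2]), 1), (np.array([-0.6, -0.9]), -1)]
    w, m = perceptron(examples)

The classic guarantee: if some unit vector separates the data with margin gamma and every ||x|| <= R, the Perceptron makes at most (R/gamma)^2 mistakes, the margin-based bound that motivates this lecture's topics.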

Additional Readings & More Information