CMU 15-859(B), Spring 2012

MACHINE LEARNING THEORY

Avrim Blum

MW 1:30-2:50, GHC 4303


Course description: This course will focus on theoretical aspects of machine learning. We will examine questions such as: What kinds of guarantees can we prove about learning algorithms? Can we design algorithms for interesting learning tasks with strong guarantees on accuracy and the amount of data needed? What can we say about the inherent ease or difficulty of learning problems? Can we devise models that are amenable to theoretical analysis and that also make sense empirically? Addressing these questions will bring in connections to probability and statistics, online algorithms, game theory, complexity theory, information theory, cryptography, and empirical machine learning research. Grading will be based on 6 homework assignments, class participation, a small class project, and a take-home final (worth about 2 homeworks). From time to time, students will also be asked to help grade assignments.
[2010 version of the course]

Prerequisites: A Theory/Algorithms background or a Machine Learning background.

Text (recommended): An Introduction to Computational Learning Theory by Michael Kearns and Umesh Vazirani, plus papers and notes for topics not in the book.

Office hours: Email / stop by anytime!




Lecture Notes & tentative plan

  1. 01/18: Introduction. PAC model and Occam's razor.
  2. 01/23: The Mistake-Bound model. Combining expert advice. Connections to info theory. (See the Weighted Majority sketch after this schedule.)
  3. 01/25: The Winnow algorithm.
  4. 01/30: The Perceptron Algorithm, margins, & intro to kernels. [slides] (See the Perceptron sketch after this schedule.)
  5. 02/01: Uniform convergence, tail inequalities (Chernoff/Hoeffding), VC-dimension I. [more notes]
  6. 02/06: VC-dimension II (proofs of main theorems).
  7. 02/08: Boosting I: Weak to strong learning, Schapire's original method.
  8. 02/13: Boosting II: AdaBoost + connection to the WM analysis + L_1 margin bounds. (See the AdaBoost sketch after this schedule.)
  9. 02/15: Rademacher bounds and McDiarmid's inequality.
  10. 02/20: MB=>PAC, Support Vector Machines, L_2 margin uniform-convergence bounds.
  11. 02/22: Margins, kernels, and general similarity functions (L_1 and L_2 connection).
  12. 02/27: Active learning.
  13. 02/29: Semi-supervised learning.
  14. 03/05: Learning and property testing.
  15. 03/19: Learning with noise and the Statistical Query model I.
  16. 03/21: Statistical Query model II: characterizing weak SQ-learnability.
  17. 03/26: Fourier-based learning and learning with Membership queries: the KM algorithm.
  18. 03/28: Fourier spectrum of decision trees and DNF. Also hardness of learning parities with kernels.
  19. 04/02: Learning Finite State Environments.
  20. 04/04: MDPs and reinforcement learning.
  21. 04/09: Maxent and maximum-likelihood exponential models; connection to Winnow.
  22. 04/11: Offline->online optimization.
  23. 04/16: Bandit algorithms and sleeping experts.
  24. 04/18: Game theory I (zero-sum and general-sum games). [paper on internal vs. external regret]
  25. 04/23: Game theory II (achieving low internal/swap regret). Congestion/exact-potential games.
  26. 04/25: Transfer learning.
  27. 04/30: Project presentations.
  28. 05/02: Project presentations.
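
A few illustrative sketches of algorithms from the schedule above follow. These are unofficial Python sketches for orientation only; the function names, interfaces, and defaults are choices made here, not course material.

Weighted Majority (combining expert advice, 01/23): keep one weight per expert, predict with the weighted vote, and multiply the weight of every wrong expert by beta. With beta = 1/2, the standard analysis bounds the learner's mistakes by about 2.41*(m + lg n), where m is the number of mistakes of the best expert and n is the number of experts.

    def weighted_majority(expert_predictions, outcomes, beta=0.5):
        """expert_predictions: length-T list of length-n lists of {0,1} predictions.
        outcomes: length-T list of true {0,1} labels. Returns the learner's mistake count."""
        n = len(expert_predictions[0])
        weights = [1.0] * n
        mistakes = 0
        for preds, y in zip(expert_predictions, outcomes):
            # Predict with the weighted majority vote of the experts.
            vote_one = sum(w for w, p in zip(weights, preds) if p == 1)
            vote_zero = sum(w for w, p in zip(weights, preds) if p == 0)
            guess = 1 if vote_one >= vote_zero else 0
            if guess != y:
                mistakes += 1
            # Penalize every expert that was wrong on this round.
            weights = [w * beta if p != y else w for w, p in zip(weights, preds)]
        return mistakes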
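
Perceptron (01/30): start with the zero vector and, on each mistake, add y*x to the weights. If some unit vector separates the data with margin gamma and every ||x|| <= R, the algorithm makes at most (R/gamma)^2 mistakes in total, regardless of how the examples are ordered.

    def perceptron(examples, epochs=100):
        """examples: list of (x, y) pairs with x a list of floats and y in {-1, +1}.
        Returns a weight vector; converges if the data is linearly separable."""
        w = [0.0] * len(examples[0][0])
        for _ in range(epochs):
            mistakes = 0
            for x, y in examples:
                # A nonpositive margin on (x, y) counts as a mistake.
                if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                    w = [wi + y * xi for wi, xi in zip(w, x)]  # additive update
                    mistakes += 1
            if mistakes == 0:  # a full pass with no mistakes: done
                break
        return w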
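
AdaBoost (02/13): repeatedly call a weak learner on a reweighted sample, upweighting the examples the current hypothesis gets wrong, and output a weighted vote. Here weak_learn(examples, dist) is an assumed interface returning a hypothesis h with h(x) in {-1, +1}.

    import math

    def adaboost(examples, weak_learn, rounds):
        """examples: list of (x, y) pairs with y in {-1, +1}.
        Returns a classifier mapping x to {-1, +1} via a weighted vote."""
        m = len(examples)
        dist = [1.0 / m] * m                      # start from the uniform distribution
        hyps, alphas = [], []
        for _ in range(rounds):
            h = weak_learn(examples, dist)
            err = sum(d for d, (x, y) in zip(dist, examples) if h(x) != y)
            if err >= 0.5:                        # no weak edge on this distribution
                break
            err = max(err, 1e-12)                 # guard the division below
            alpha = 0.5 * math.log((1 - err) / err)
            hyps.append(h)
            alphas.append(alpha)
            # Upweight mistakes, downweight correct examples, then renormalize.
            dist = [d * math.exp(-alpha * y * h(x)) for d, (x, y) in zip(dist, examples)]
            z = sum(dist)
            dist = [d / z for d in dist]
        def classify(x):
            return 1 if sum(a * h(x) for a, h in zip(alphas, hyps)) >= 0 else -1
        return classify

If each weak hypothesis has edge gamma_t = 1/2 - err_t, the usual analysis bounds the training error of the final vote by exp(-2 * sum_t gamma_t^2).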

Additional Readings & More Information