15-681 MACHINE LEARNING, Fall 1996

School of Computer Science, Carnegie Mellon University
Wean 3412, Tues & Thurs 12:00-1:20


Instructors:

Tom Mitchell (Wean 5309, office hours: Wed 3-4),
Avrim Blum (Wean 4107, office hours: Wed 10-11).

Teaching Assistant:

Scott Davies (Wean 5103, office hours: Mon 2-3)

Course textbook: Machine Learning, Tom Mitchell, McGraw Hill

Copies of the textbook can be picked up in Jean Harpley's office: Wean Hall 5313.


Machine Learning is concerned with computer programs that automatically improve their performance through experience. Machine Learning methods have been applied to problems such as learning to drive an autonomous vehicle, learning to recognize human speech, learning to detect credit card fraud, and learning strategies for game playing. This course covers the primary approaches to machine learning from a variety of fields, including inductive inference of decision trees, neural network learning, statistical learning methods, genetic algorithms, Bayesian methods, explanation-based learning, and reinforcement learning. The course will also cover theoretical concepts such as inductive bias, the PAC and Mistake-Bound learning frameworks, Occam's Razor, uniform convergence, models of noise, and Fourier analysis. Programming assignments involve experimenting with various learning problems and algorithms. This course is a combined upper-level undergraduate and introductory graduate course. CS Ph.D. students can obtain one core credit unit by arrangement with the instructor.
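As a concrete (illustrative, not course-material) taste of "improving performance through experience," consider the perceptron: it repeatedly adjusts its weights whenever its current hypothesis misclassifies a training example, a mistake-driven update closely tied to the Mistake-Bound framework mentioned above. The dataset and parameters below are invented for illustration.

```python
# A minimal perceptron sketch: a program whose predictions improve
# with experience (training examples).  Illustrative only.

def train_perceptron(examples, epochs=20, lr=1.0):
    """examples: list of (features, label) pairs with label in {-1, +1}."""
    n = len(examples[0][0])
    w = [0.0] * n   # weight vector, initially zero
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, y in examples:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if score > 0 else -1
            if pred != y:  # mistake-driven update: nudge toward the true label
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Learn logical AND, a linearly separable concept, so the
# perceptron convergence theorem guarantees it finds a separator.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # -> [-1, -1, -1, 1]
```

Because AND is linearly separable, the updates eventually stop and every training example is classified correctly; on a non-separable concept such as XOR the same loop would cycle forever, which is one motivation for the richer hypothesis spaces (decision trees, multi-layer networks) the course covers.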

Here is the course syllabus.


Assignment Updates

You should check the Assignment Update page periodically for important announcements regarding current assignments.

Lecture notes (PostScript)

Note to people outside CMU

Feel free to use the slides and materials available online here. Please email Tom.Mitchell@cmu.edu or avrim@cs.cmu.edu with any corrections or improvements.

See also the Fall 1995 version of this course, including its midterm and final exam.